Mar 6 03:01:18.105512 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:16:40 -00 2026
Mar 6 03:01:18.105576 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 03:01:18.105601 kernel: BIOS-provided physical RAM map:
Mar 6 03:01:18.105615 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Mar 6 03:01:18.105629 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Mar 6 03:01:18.105643 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Mar 6 03:01:18.105661 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Mar 6 03:01:18.105676 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Mar 6 03:01:18.105690 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd2e4fff] usable
Mar 6 03:01:18.105708 kernel: BIOS-e820: [mem 0x00000000bd2e5000-0x00000000bd2eefff] ACPI data
Mar 6 03:01:18.105723 kernel: BIOS-e820: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] usable
Mar 6 03:01:18.105737 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Mar 6 03:01:18.105753 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Mar 6 03:01:18.105768 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Mar 6 03:01:18.105786 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Mar 6 03:01:18.105807 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Mar 6 03:01:18.105823 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Mar 6 03:01:18.105840 kernel: NX (Execute Disable) protection: active
Mar 6 03:01:18.105856 kernel: APIC: Static calls initialized
Mar 6 03:01:18.105872 kernel: efi: EFI v2.7 by EDK II
Mar 6 03:01:18.105889 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 RNG=0xbfb73018 TPMEventLog=0xbd2e5018
Mar 6 03:01:18.105904 kernel: random: crng init done
Mar 6 03:01:18.105920 kernel: secureboot: Secure boot disabled
Mar 6 03:01:18.105936 kernel: SMBIOS 2.4 present.
Mar 6 03:01:18.105953 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Mar 6 03:01:18.105972 kernel: DMI: Memory slots populated: 1/1
Mar 6 03:01:18.105989 kernel: Hypervisor detected: KVM
Mar 6 03:01:18.106004 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 6 03:01:18.106020 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 03:01:18.106037 kernel: kvm-clock: using sched offset of 15346878352 cycles
Mar 6 03:01:18.106054 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 03:01:18.106071 kernel: tsc: Detected 2299.998 MHz processor
Mar 6 03:01:18.106088 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 03:01:18.106105 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 03:01:18.106122 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Mar 6 03:01:18.106149 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Mar 6 03:01:18.106166 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 03:01:18.106182 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 6 03:01:18.106223 kernel: Using GB pages for direct mapping
Mar 6 03:01:18.106240 kernel: ACPI: Early table checksum verification disabled
Mar 6 03:01:18.106265 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Mar 6 03:01:18.106282 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Mar 6 03:01:18.106304 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Mar 6 03:01:18.106321 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Mar 6 03:01:18.106339 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Mar 6 03:01:18.106357 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Mar 6 03:01:18.106375 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Mar 6 03:01:18.106392 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Mar 6 03:01:18.106410 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Mar 6 03:01:18.106431 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Mar 6 03:01:18.106448 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Mar 6 03:01:18.106466 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Mar 6 03:01:18.106484 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Mar 6 03:01:18.106500 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Mar 6 03:01:18.106517 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Mar 6 03:01:18.106534 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Mar 6 03:01:18.106551 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Mar 6 03:01:18.106569 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Mar 6 03:01:18.106590 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Mar 6 03:01:18.106608 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Mar 6 03:01:18.106625 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 6 03:01:18.106643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Mar 6 03:01:18.106660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Mar 6 03:01:18.106678 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Mar 6 03:01:18.106696 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Mar 6 03:01:18.106714 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Mar 6 03:01:18.106732 kernel: Zone ranges:
Mar 6 03:01:18.106753 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 03:01:18.106771 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 6 03:01:18.106788 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Mar 6 03:01:18.106806 kernel: Device empty
Mar 6 03:01:18.106822 kernel: Movable zone start for each node
Mar 6 03:01:18.106840 kernel: Early memory node ranges
Mar 6 03:01:18.106857 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Mar 6 03:01:18.106874 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Mar 6 03:01:18.106892 kernel: node 0: [mem 0x0000000000100000-0x00000000bd2e4fff]
Mar 6 03:01:18.106913 kernel: node 0: [mem 0x00000000bd2ef000-0x00000000bf8ecfff]
Mar 6 03:01:18.106930 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Mar 6 03:01:18.106947 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Mar 6 03:01:18.106965 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Mar 6 03:01:18.106982 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 03:01:18.107000 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Mar 6 03:01:18.107018 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Mar 6 03:01:18.107035 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Mar 6 03:01:18.107053 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 6 03:01:18.107074 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Mar 6 03:01:18.107092 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 6 03:01:18.107109 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 03:01:18.107127 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 6 03:01:18.107152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 03:01:18.107170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 03:01:18.107188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 03:01:18.107623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 03:01:18.107642 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 03:01:18.107666 kernel: CPU topo: Max. logical packages: 1
Mar 6 03:01:18.107684 kernel: CPU topo: Max. logical dies: 1
Mar 6 03:01:18.107702 kernel: CPU topo: Max. dies per package: 1
Mar 6 03:01:18.107719 kernel: CPU topo: Max. threads per core: 2
Mar 6 03:01:18.107737 kernel: CPU topo: Num. cores per package: 1
Mar 6 03:01:18.107755 kernel: CPU topo: Num. threads per package: 2
Mar 6 03:01:18.107773 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 6 03:01:18.107791 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 6 03:01:18.107808 kernel: Booting paravirtualized kernel on KVM
Mar 6 03:01:18.107825 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 03:01:18.107846 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 6 03:01:18.107864 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 6 03:01:18.107882 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 6 03:01:18.107898 kernel: pcpu-alloc: [0] 0 1
Mar 6 03:01:18.107915 kernel: kvm-guest: PV spinlocks enabled
Mar 6 03:01:18.107933 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 03:01:18.107952 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 03:01:18.107968 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 6 03:01:18.107988 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 03:01:18.108004 kernel: Fallback order for Node 0: 0
Mar 6 03:01:18.108021 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136
Mar 6 03:01:18.108037 kernel: Policy zone: Normal
Mar 6 03:01:18.108054 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 03:01:18.108072 kernel: software IO TLB: area num 2.
Mar 6 03:01:18.108104 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 6 03:01:18.108126 kernel: Kernel/User page tables isolation: enabled
Mar 6 03:01:18.108154 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 6 03:01:18.108173 kernel: ftrace: allocated 157 pages with 5 groups
Mar 6 03:01:18.108207 kernel: Dynamic Preempt: voluntary
Mar 6 03:01:18.108226 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 03:01:18.108249 kernel: rcu: RCU event tracing is enabled.
Mar 6 03:01:18.108268 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 6 03:01:18.108286 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 03:01:18.108305 kernel: Rude variant of Tasks RCU enabled.
Mar 6 03:01:18.108323 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 03:01:18.108346 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 03:01:18.108364 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 6 03:01:18.108383 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 03:01:18.108402 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 03:01:18.108421 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 03:01:18.108439 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 6 03:01:18.108457 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 03:01:18.108476 kernel: Console: colour dummy device 80x25
Mar 6 03:01:18.108499 kernel: printk: legacy console [ttyS0] enabled
Mar 6 03:01:18.108517 kernel: ACPI: Core revision 20240827
Mar 6 03:01:18.108536 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 03:01:18.108554 kernel: x2apic enabled
Mar 6 03:01:18.108573 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 03:01:18.108591 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Mar 6 03:01:18.108610 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 6 03:01:18.108629 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Mar 6 03:01:18.108648 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Mar 6 03:01:18.108671 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Mar 6 03:01:18.108690 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 03:01:18.108709 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Mar 6 03:01:18.108727 kernel: Spectre V2 : Mitigation: IBRS
Mar 6 03:01:18.108745 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 03:01:18.108763 kernel: RETBleed: Mitigation: IBRS
Mar 6 03:01:18.108782 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 6 03:01:18.108800 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Mar 6 03:01:18.108819 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 6 03:01:18.108841 kernel: MDS: Mitigation: Clear CPU buffers
Mar 6 03:01:18.108860 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 03:01:18.108878 kernel: active return thunk: its_return_thunk
Mar 6 03:01:18.108896 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 6 03:01:18.108914 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 03:01:18.108932 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 03:01:18.108951 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 03:01:18.108969 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 03:01:18.108988 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 6 03:01:18.109011 kernel: Freeing SMP alternatives memory: 32K
Mar 6 03:01:18.109029 kernel: pid_max: default: 32768 minimum: 301
Mar 6 03:01:18.109048 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 6 03:01:18.109067 kernel: landlock: Up and running.
Mar 6 03:01:18.109085 kernel: SELinux: Initializing.
Mar 6 03:01:18.109104 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 6 03:01:18.109122 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 6 03:01:18.109141 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Mar 6 03:01:18.109167 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Mar 6 03:01:18.109190 kernel: signal: max sigframe size: 1776
Mar 6 03:01:18.111151 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 03:01:18.111173 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 03:01:18.111231 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 6 03:01:18.111253 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 03:01:18.111272 kernel: smp: Bringing up secondary CPUs ...
Mar 6 03:01:18.111291 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 03:01:18.111309 kernel: .... node #0, CPUs: #1
Mar 6 03:01:18.111328 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 6 03:01:18.111356 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 6 03:01:18.111375 kernel: smp: Brought up 1 node, 2 CPUs
Mar 6 03:01:18.111394 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 6 03:01:18.111414 kernel: Memory: 7555812K/7860544K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 298900K reserved, 0K cma-reserved)
Mar 6 03:01:18.111432 kernel: devtmpfs: initialized
Mar 6 03:01:18.111451 kernel: x86/mm: Memory block size: 128MB
Mar 6 03:01:18.111470 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Mar 6 03:01:18.111489 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 03:01:18.111511 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 6 03:01:18.111531 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 03:01:18.111549 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 03:01:18.111568 kernel: audit: initializing netlink subsys (disabled)
Mar 6 03:01:18.111587 kernel: audit: type=2000 audit(1772766074.230:1): state=initialized audit_enabled=0 res=1
Mar 6 03:01:18.111605 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 03:01:18.111624 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 03:01:18.111643 kernel: cpuidle: using governor menu
Mar 6 03:01:18.111662 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 03:01:18.111684 kernel: dca service started, version 1.12.1
Mar 6 03:01:18.111702 kernel: PCI: Using configuration type 1 for base access
Mar 6 03:01:18.111721 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 03:01:18.111740 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 03:01:18.111758 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 03:01:18.111777 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 03:01:18.111805 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 03:01:18.111824 kernel: ACPI: Added _OSI(Module Device)
Mar 6 03:01:18.111842 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 03:01:18.111864 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 03:01:18.111883 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 6 03:01:18.111902 kernel: ACPI: Interpreter enabled
Mar 6 03:01:18.111921 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 6 03:01:18.111940 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 03:01:18.111959 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 03:01:18.111977 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 6 03:01:18.111996 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 6 03:01:18.112015 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 03:01:18.112319 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 03:01:18.112514 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 6 03:01:18.112698 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 6 03:01:18.112721 kernel: PCI host bridge to bus 0000:00
Mar 6 03:01:18.112896 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 03:01:18.113068 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 03:01:18.113313 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 03:01:18.113483 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 6 03:01:18.113648 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 03:01:18.113850 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Mar 6 03:01:18.114052 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Mar 6 03:01:18.114372 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Mar 6 03:01:18.114572 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 6 03:01:18.114783 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Mar 6 03:01:18.114976 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Mar 6 03:01:18.115178 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Mar 6 03:01:18.115404 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 6 03:01:18.115597 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Mar 6 03:01:18.115811 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Mar 6 03:01:18.116011 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 6 03:01:18.116229 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Mar 6 03:01:18.116417 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Mar 6 03:01:18.116438 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 03:01:18.116456 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 03:01:18.116474 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 03:01:18.116492 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 03:01:18.116510 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 6 03:01:18.116534 kernel: iommu: Default domain type: Translated
Mar 6 03:01:18.116553 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 03:01:18.116572 kernel: efivars: Registered efivars operations
Mar 6 03:01:18.116590 kernel: PCI: Using ACPI for IRQ routing
Mar 6 03:01:18.116608 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 03:01:18.116626 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 6 03:01:18.116645 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 6 03:01:18.116663 kernel: e820: reserve RAM buffer [mem 0xbd2e5000-0xbfffffff]
Mar 6 03:01:18.116680 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 6 03:01:18.116697 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 6 03:01:18.116719 kernel: vgaarb: loaded
Mar 6 03:01:18.116738 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 03:01:18.116755 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 03:01:18.116773 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 03:01:18.116790 kernel: pnp: PnP ACPI init
Mar 6 03:01:18.116808 kernel: pnp: PnP ACPI: found 7 devices
Mar 6 03:01:18.116827 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 03:01:18.116844 kernel: NET: Registered PF_INET protocol family
Mar 6 03:01:18.116863 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 6 03:01:18.116885 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 6 03:01:18.116903 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 03:01:18.116921 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 03:01:18.116939 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 6 03:01:18.116956 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 6 03:01:18.116974 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 6 03:01:18.116991 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 6 03:01:18.117009 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 03:01:18.117027 kernel: NET: Registered PF_XDP protocol family
Mar 6 03:01:18.117644 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 03:01:18.117840 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 03:01:18.118009 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 03:01:18.118184 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 6 03:01:18.119488 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 6 03:01:18.119517 kernel: PCI: CLS 0 bytes, default 64
Mar 6 03:01:18.119535 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 6 03:01:18.119561 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 6 03:01:18.119580 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 6 03:01:18.119600 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 6 03:01:18.119618 kernel: clocksource: Switched to clocksource tsc
Mar 6 03:01:18.119636 kernel: Initialise system trusted keyrings
Mar 6 03:01:18.119653 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 6 03:01:18.119670 kernel: Key type asymmetric registered
Mar 6 03:01:18.119686 kernel: Asymmetric key parser 'x509' registered
Mar 6 03:01:18.119703 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 6 03:01:18.119726 kernel: io scheduler mq-deadline registered
Mar 6 03:01:18.119744 kernel: io scheduler kyber registered
Mar 6 03:01:18.119762 kernel: io scheduler bfq registered
Mar 6 03:01:18.119780 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 03:01:18.119799 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 6 03:01:18.120022 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 6 03:01:18.120048 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 6 03:01:18.123311 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 6 03:01:18.123345 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 6 03:01:18.123560 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 6 03:01:18.123585 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 03:01:18.123605 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 03:01:18.123622 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 6 03:01:18.123640 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 6 03:01:18.123656 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 6 03:01:18.123852 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 6 03:01:18.123878 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 03:01:18.123901 kernel: i8042: Warning: Keylock active
Mar 6 03:01:18.123919 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 03:01:18.123936 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 03:01:18.124150 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 6 03:01:18.125410 kernel: rtc_cmos 00:00: registered as rtc0
Mar 6 03:01:18.125606 kernel: rtc_cmos 00:00: setting system clock to 2026-03-06T03:01:17 UTC (1772766077)
Mar 6 03:01:18.125785 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 6 03:01:18.125818 kernel: intel_pstate: CPU model not supported
Mar 6 03:01:18.125838 kernel: pstore: Using crash dump compression: deflate
Mar 6 03:01:18.125857 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 6 03:01:18.125876 kernel: NET: Registered PF_INET6 protocol family
Mar 6 03:01:18.125894 kernel: Segment Routing with IPv6
Mar 6 03:01:18.125913 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 03:01:18.125932 kernel: NET: Registered PF_PACKET protocol family
Mar 6 03:01:18.125952 kernel: Key type dns_resolver registered
Mar 6 03:01:18.125970 kernel: IPI shorthand broadcast: enabled
Mar 6 03:01:18.125989 kernel: sched_clock: Marking stable (3784004467, 160728611)->(3984261514, -39528436)
Mar 6 03:01:18.126010 kernel: registered taskstats version 1
Mar 6 03:01:18.126029 kernel: Loading compiled-in X.509 certificates
Mar 6 03:01:18.126048 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 30893fe9fd219d26109af079e6493e1c8b1c00af'
Mar 6 03:01:18.126066 kernel: Demotion targets for Node 0: null
Mar 6 03:01:18.126085 kernel: Key type .fscrypt registered
Mar 6 03:01:18.126103 kernel: Key type fscrypt-provisioning registered
Mar 6 03:01:18.126121 kernel: ima: Allocated hash algorithm: sha1
Mar 6 03:01:18.126140 kernel: ima: No architecture policies found
Mar 6 03:01:18.126168 kernel: clk: Disabling unused clocks
Mar 6 03:01:18.126191 kernel: Warning: unable to open an initial console.
Mar 6 03:01:18.129259 kernel: Freeing unused kernel image (initmem) memory: 46196K
Mar 6 03:01:18.129280 kernel: Write protecting the kernel read-only data: 40960k
Mar 6 03:01:18.129299 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 6 03:01:18.129318 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 6 03:01:18.129337 kernel: Run /init as init process
Mar 6 03:01:18.129356 kernel: with arguments:
Mar 6 03:01:18.129374 kernel: /init
Mar 6 03:01:18.129392 kernel: with environment:
Mar 6 03:01:18.129416 kernel: HOME=/
Mar 6 03:01:18.129435 kernel: TERM=linux
Mar 6 03:01:18.129454 systemd[1]: Successfully made /usr/ read-only.
Mar 6 03:01:18.129477 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 03:01:18.129498 systemd[1]: Detected virtualization google.
Mar 6 03:01:18.129517 systemd[1]: Detected architecture x86-64.
Mar 6 03:01:18.129534 systemd[1]: Running in initrd.
Mar 6 03:01:18.129558 systemd[1]: No hostname configured, using default hostname.
Mar 6 03:01:18.129579 systemd[1]: Hostname set to .
Mar 6 03:01:18.129598 systemd[1]: Initializing machine ID from random generator.
Mar 6 03:01:18.129617 systemd[1]: Queued start job for default target initrd.target.
Mar 6 03:01:18.129637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 03:01:18.129678 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 03:01:18.129703 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 03:01:18.129723 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 03:01:18.129744 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 03:01:18.129767 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 03:01:18.129789 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 03:01:18.129810 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 03:01:18.129833 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 03:01:18.129854 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 03:01:18.129875 systemd[1]: Reached target paths.target - Path Units.
Mar 6 03:01:18.129895 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 03:01:18.129916 systemd[1]: Reached target swap.target - Swaps.
Mar 6 03:01:18.129936 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 03:01:18.129957 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 03:01:18.129978 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 03:01:18.129999 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 03:01:18.130024 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 6 03:01:18.130044 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 03:01:18.130066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 03:01:18.130087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 03:01:18.130108 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 03:01:18.130129 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 03:01:18.130157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 03:01:18.130176 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 03:01:18.131109 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 6 03:01:18.131136 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 03:01:18.131164 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 03:01:18.131184 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 03:01:18.131219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 03:01:18.131279 systemd-journald[192]: Collecting audit messages is disabled.
Mar 6 03:01:18.131336 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 03:01:18.131359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 03:01:18.131381 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 03:01:18.131407 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 03:01:18.131432 systemd-journald[192]: Journal started
Mar 6 03:01:18.131473 systemd-journald[192]: Runtime Journal (/run/log/journal/47fac20b687e484bab0617ad8032e1d7) is 8M, max 148.6M, 140.6M free.
Mar 6 03:01:18.132163 systemd-modules-load[194]: Inserted module 'overlay'
Mar 6 03:01:18.137248 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 03:01:18.146297 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 03:01:18.155253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:01:18.164666 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 03:01:18.173369 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 03:01:18.183381 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 03:01:18.188258 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 03:01:18.195213 kernel: Bridge firewalling registered
Mar 6 03:01:18.195429 systemd-modules-load[194]: Inserted module 'br_netfilter'
Mar 6 03:01:18.199719 systemd-tmpfiles[205]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 6 03:01:18.200959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 03:01:18.207398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 03:01:18.220279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 03:01:18.221373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 03:01:18.226652 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 03:01:18.234475 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 6 03:01:18.236901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 03:01:18.257371 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 03:01:18.278785 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53 Mar 6 03:01:18.332000 systemd-resolved[230]: Positive Trust Anchors: Mar 6 03:01:18.332550 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 03:01:18.332783 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 03:01:18.339141 systemd-resolved[230]: Defaulting to hostname 'linux'. Mar 6 03:01:18.343808 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 03:01:18.350404 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 6 03:01:18.402244 kernel: SCSI subsystem initialized Mar 6 03:01:18.414233 kernel: Loading iSCSI transport class v2.0-870. Mar 6 03:01:18.426237 kernel: iscsi: registered transport (tcp) Mar 6 03:01:18.451351 kernel: iscsi: registered transport (qla4xxx) Mar 6 03:01:18.451421 kernel: QLogic iSCSI HBA Driver Mar 6 03:01:18.474729 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 6 03:01:18.493369 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 6 03:01:18.500121 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 6 03:01:18.560290 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 6 03:01:18.563318 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 6 03:01:18.622231 kernel: raid6: avx2x4 gen() 18036 MB/s Mar 6 03:01:18.639233 kernel: raid6: avx2x2 gen() 18071 MB/s Mar 6 03:01:18.656594 kernel: raid6: avx2x1 gen() 14194 MB/s Mar 6 03:01:18.656641 kernel: raid6: using algorithm avx2x2 gen() 18071 MB/s Mar 6 03:01:18.674636 kernel: raid6: .... xor() 18627 MB/s, rmw enabled Mar 6 03:01:18.674682 kernel: raid6: using avx2x2 recovery algorithm Mar 6 03:01:18.697234 kernel: xor: automatically using best checksumming function avx Mar 6 03:01:18.879237 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 6 03:01:18.888395 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 6 03:01:18.890840 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 03:01:18.924260 systemd-udevd[439]: Using default interface naming scheme 'v255'. Mar 6 03:01:18.933251 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 03:01:18.937959 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 6 03:01:18.970931 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation Mar 6 03:01:19.003836 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 6 03:01:19.009862 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 03:01:19.106590 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 03:01:19.112827 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 6 03:01:19.212230 kernel: cryptd: max_cpu_qlen set to 1000 Mar 6 03:01:19.224219 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 6 03:01:19.235119 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Mar 6 03:01:19.246240 kernel: scsi host0: Virtio SCSI HBA Mar 6 03:01:19.254254 kernel: blk-mq: reduced tag depth to 10240 Mar 6 03:01:19.311227 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Mar 6 03:01:19.321038 kernel: AES CTR mode by8 optimization enabled Mar 6 03:01:19.349453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 03:01:19.352439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 03:01:19.358645 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 03:01:19.363622 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 03:01:19.369594 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 6 03:01:19.385219 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Mar 6 03:01:19.385537 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Mar 6 03:01:19.388138 kernel: sd 0:0:1:0: [sda] Write Protect is off Mar 6 03:01:19.388541 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Mar 6 03:01:19.388786 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 6 03:01:19.403302 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 6 03:01:19.403360 kernel: GPT:17805311 != 33554431 Mar 6 03:01:19.403387 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 6 03:01:19.403418 kernel: GPT:17805311 != 33554431 Mar 6 03:01:19.403440 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 6 03:01:19.403463 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 6 03:01:19.405684 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Mar 6 03:01:19.406139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 03:01:19.494106 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Mar 6 03:01:19.494767 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 6 03:01:19.517719 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Mar 6 03:01:19.528380 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Mar 6 03:01:19.528627 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Mar 6 03:01:19.548364 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Mar 6 03:01:19.548651 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 6 03:01:19.556297 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 6 03:01:19.560285 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 03:01:19.565688 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 6 03:01:19.580377 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 6 03:01:19.596865 disk-uuid[595]: Primary Header is updated. Mar 6 03:01:19.596865 disk-uuid[595]: Secondary Entries is updated. Mar 6 03:01:19.596865 disk-uuid[595]: Secondary Header is updated. Mar 6 03:01:19.608935 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 6 03:01:19.615265 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 6 03:01:19.653235 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 6 03:01:20.672223 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 6 03:01:20.672433 disk-uuid[596]: The operation has completed successfully. Mar 6 03:01:20.745099 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 6 03:01:20.745281 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 6 03:01:20.802829 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 6 03:01:20.830463 sh[617]: Success Mar 6 03:01:20.851576 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 6 03:01:20.851648 kernel: device-mapper: uevent: version 1.0.3 Mar 6 03:01:20.851689 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 6 03:01:20.865253 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Mar 6 03:01:20.942114 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 6 03:01:20.946469 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 6 03:01:20.967171 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 6 03:01:20.986237 kernel: BTRFS: device fsid 1235dd15-5252-4928-9c6c-372370c6bfca devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (629) Mar 6 03:01:20.989557 kernel: BTRFS info (device dm-0): first mount of filesystem 1235dd15-5252-4928-9c6c-372370c6bfca Mar 6 03:01:20.989622 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 6 03:01:21.013984 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Mar 6 03:01:21.014059 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 6 03:01:21.014084 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 6 03:01:21.017383 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 6 03:01:21.021415 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 6 03:01:21.021878 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 6 03:01:21.024090 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 6 03:01:21.032780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 6 03:01:21.075522 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (662) Mar 6 03:01:21.078256 kernel: BTRFS info (device sda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 03:01:21.078313 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 03:01:21.089714 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 6 03:01:21.089776 kernel: BTRFS info (device sda6): turning on async discard Mar 6 03:01:21.089810 kernel: BTRFS info (device sda6): enabling free space tree Mar 6 03:01:21.097249 kernel: BTRFS info (device sda6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 03:01:21.098307 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 6 03:01:21.105396 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 6 03:01:21.191138 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 03:01:21.199675 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 03:01:21.350666 systemd-networkd[798]: lo: Link UP Mar 6 03:01:21.350684 systemd-networkd[798]: lo: Gained carrier Mar 6 03:01:21.354157 systemd-networkd[798]: Enumeration completed Mar 6 03:01:21.355688 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 03:01:21.355695 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 03:01:21.359216 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 03:01:21.364157 ignition[726]: Ignition 2.22.0 Mar 6 03:01:21.359661 systemd-networkd[798]: eth0: Link UP Mar 6 03:01:21.364168 ignition[726]: Stage: fetch-offline Mar 6 03:01:21.359915 systemd-networkd[798]: eth0: Gained carrier Mar 6 03:01:21.364237 ignition[726]: no configs at "/usr/lib/ignition/base.d" Mar 6 03:01:21.359932 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 03:01:21.364253 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 6 03:01:21.361604 systemd[1]: Reached target network.target - Network. Mar 6 03:01:21.364411 ignition[726]: parsed url from cmdline: "" Mar 6 03:01:21.370634 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 6 03:01:21.364419 ignition[726]: no config URL provided Mar 6 03:01:21.372305 systemd-networkd[798]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011.c.flatcar-212911.internal' to 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:01:21.364429 ignition[726]: reading system config file "/usr/lib/ignition/user.ign" Mar 6 03:01:21.372318 systemd-networkd[798]: eth0: DHCPv4 address 10.128.0.102/32, gateway 10.128.0.1 acquired from 169.254.169.254 Mar 6 03:01:21.364444 ignition[726]: no config at "/usr/lib/ignition/user.ign" Mar 6 03:01:21.379316 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 6 03:01:21.364455 ignition[726]: failed to fetch config: resource requires networking Mar 6 03:01:21.364720 ignition[726]: Ignition finished successfully Mar 6 03:01:21.423748 ignition[807]: Ignition 2.22.0 Mar 6 03:01:21.423759 ignition[807]: Stage: fetch Mar 6 03:01:21.423931 ignition[807]: no configs at "/usr/lib/ignition/base.d" Mar 6 03:01:21.435820 unknown[807]: fetched base config from "system" Mar 6 03:01:21.423942 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 6 03:01:21.435828 unknown[807]: fetched base config from "system" Mar 6 03:01:21.424035 ignition[807]: parsed url from cmdline: "" Mar 6 03:01:21.435854 unknown[807]: fetched user config from "gcp" Mar 6 03:01:21.424040 ignition[807]: no config URL provided Mar 6 03:01:21.443371 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 6 03:01:21.424046 ignition[807]: reading system config file "/usr/lib/ignition/user.ign" Mar 6 03:01:21.446816 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 6 03:01:21.424055 ignition[807]: no config at "/usr/lib/ignition/user.ign" Mar 6 03:01:21.424088 ignition[807]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Mar 6 03:01:21.428040 ignition[807]: GET result: OK Mar 6 03:01:21.428403 ignition[807]: parsing config with SHA512: ee3313ad888b7d6765ef4294b59f1b34bb34904138ae8c6cf9b78e2b97afe5c365ab0a0e71c40d885d1ed71b697301e17818cd4bac838ae96015a573c33272ea Mar 6 03:01:21.440099 ignition[807]: fetch: fetch complete Mar 6 03:01:21.440113 ignition[807]: fetch: fetch passed Mar 6 03:01:21.440249 ignition[807]: Ignition finished successfully Mar 6 03:01:21.487062 ignition[813]: Ignition 2.22.0 Mar 6 03:01:21.487080 ignition[813]: Stage: kargs Mar 6 03:01:21.487339 ignition[813]: no configs at "/usr/lib/ignition/base.d" Mar 6 03:01:21.491027 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 6 03:01:21.487356 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 6 03:01:21.496636 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 6 03:01:21.488778 ignition[813]: kargs: kargs passed Mar 6 03:01:21.488833 ignition[813]: Ignition finished successfully Mar 6 03:01:21.533025 ignition[820]: Ignition 2.22.0 Mar 6 03:01:21.533043 ignition[820]: Stage: disks Mar 6 03:01:21.533276 ignition[820]: no configs at "/usr/lib/ignition/base.d" Mar 6 03:01:21.536269 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 6 03:01:21.533293 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 6 03:01:21.539614 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 6 03:01:21.534368 ignition[820]: disks: disks passed Mar 6 03:01:21.545467 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 6 03:01:21.534425 ignition[820]: Ignition finished successfully Mar 6 03:01:21.548519 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 6 03:01:21.552476 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 03:01:21.556461 systemd[1]: Reached target basic.target - Basic System. Mar 6 03:01:21.561966 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 6 03:01:21.603924 systemd-fsck[829]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Mar 6 03:01:21.612879 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 6 03:01:21.618328 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 6 03:01:21.784224 kernel: EXT4-fs (sda9): mounted filesystem 16ab7223-a8af-43d2-ad40-7e1bf0ff2a89 r/w with ordered data mode. Quota mode: none. Mar 6 03:01:21.785471 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 6 03:01:21.789439 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 6 03:01:21.793344 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 6 03:01:21.812132 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 6 03:01:21.819937 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 6 03:01:21.820440 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 6 03:01:21.820473 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 03:01:21.837647 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (837) Mar 6 03:01:21.837701 kernel: BTRFS info (device sda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 03:01:21.837728 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 03:01:21.835244 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 6 03:01:21.846311 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 6 03:01:21.846339 kernel: BTRFS info (device sda6): turning on async discard Mar 6 03:01:21.846355 kernel: BTRFS info (device sda6): enabling free space tree Mar 6 03:01:21.844955 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 6 03:01:21.854092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 6 03:01:21.974667 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Mar 6 03:01:21.983882 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Mar 6 03:01:21.992259 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Mar 6 03:01:21.998517 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Mar 6 03:01:22.147882 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 6 03:01:22.149967 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 6 03:01:22.164732 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 6 03:01:22.177246 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 6 03:01:22.181378 kernel: BTRFS info (device sda6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 03:01:22.223069 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 6 03:01:22.227025 ignition[952]: INFO : Ignition 2.22.0 Mar 6 03:01:22.227025 ignition[952]: INFO : Stage: mount Mar 6 03:01:22.236343 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 03:01:22.236343 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 6 03:01:22.236343 ignition[952]: INFO : mount: mount passed Mar 6 03:01:22.236343 ignition[952]: INFO : Ignition finished successfully Mar 6 03:01:22.231863 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 6 03:01:22.234664 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Mar 6 03:01:22.262280 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 6 03:01:22.289239 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (964) Mar 6 03:01:22.292207 kernel: BTRFS info (device sda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 03:01:22.292269 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 03:01:22.301044 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 6 03:01:22.301102 kernel: BTRFS info (device sda6): turning on async discard Mar 6 03:01:22.301128 kernel: BTRFS info (device sda6): enabling free space tree Mar 6 03:01:22.305104 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 6 03:01:22.343723 ignition[981]: INFO : Ignition 2.22.0 Mar 6 03:01:22.343723 ignition[981]: INFO : Stage: files Mar 6 03:01:22.351321 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 03:01:22.351321 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 6 03:01:22.351321 ignition[981]: DEBUG : files: compiled without relabeling support, skipping Mar 6 03:01:22.351321 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 6 03:01:22.351321 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 6 03:01:22.368303 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 6 03:01:22.368303 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 6 03:01:22.368303 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 6 03:01:22.368303 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 6 03:01:22.368303 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 6 03:01:22.353612 unknown[981]: wrote ssh authorized keys file for user: core Mar 6 03:01:22.469072 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 6 03:01:22.654584 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 6 03:01:22.659354 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 6 03:01:22.659354 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 6 03:01:22.659354 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 6 03:01:22.659354 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 6 03:01:22.659354 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 6 03:01:22.659354 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 6 03:01:22.659354 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 03:01:22.659354 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 03:01:22.692306 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 03:01:22.692306 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 03:01:22.692306 ignition[981]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 03:01:22.692306 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 03:01:22.692306 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 03:01:22.692306 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 6 03:01:23.002490 systemd-networkd[798]: eth0: Gained IPv6LL Mar 6 03:01:23.302534 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 6 03:01:23.964588 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 03:01:23.964588 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 6 03:01:23.973342 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 03:01:23.973342 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 03:01:23.973342 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 6 03:01:23.973342 ignition[981]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 6 03:01:23.973342 ignition[981]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 6 03:01:23.973342 ignition[981]: INFO : files: createResultFile: createFiles: op(e): 
[started] writing file "/sysroot/etc/.ignition-result.json" Mar 6 03:01:23.973342 ignition[981]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 6 03:01:23.973342 ignition[981]: INFO : files: files passed Mar 6 03:01:23.973342 ignition[981]: INFO : Ignition finished successfully Mar 6 03:01:23.972427 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 6 03:01:23.976925 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 6 03:01:23.985653 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 6 03:01:23.998474 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 6 03:01:24.034339 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 03:01:24.034339 initrd-setup-root-after-ignition[1009]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 6 03:01:23.998636 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 6 03:01:24.047333 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 03:01:24.018089 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 03:01:24.024963 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 6 03:01:24.032647 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 6 03:01:24.102891 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 6 03:01:24.103175 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 6 03:01:24.108934 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 6 03:01:24.111502 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Mar 6 03:01:24.115559 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 03:01:24.117028 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 03:01:24.151576 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 03:01:24.159143 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 03:01:24.189599 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 03:01:24.192456 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 03:01:24.198548 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 03:01:24.201828 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 03:01:24.202037 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 03:01:24.211599 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 03:01:24.214749 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 03:01:24.218874 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 03:01:24.222819 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 03:01:24.226747 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 03:01:24.230635 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 03:01:24.234756 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 03:01:24.238754 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 03:01:24.242708 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 03:01:24.247819 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 03:01:24.251715 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 03:01:24.255700 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 03:01:24.255919 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 03:01:24.265336 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 03:01:24.265728 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 03:01:24.269551 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 03:01:24.269805 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 03:01:24.273733 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 03:01:24.274263 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 03:01:24.282550 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 03:01:24.282759 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 03:01:24.288508 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 03:01:24.288692 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 03:01:24.293537 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 03:01:24.300117 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 03:01:24.311825 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 03:01:24.313209 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 03:01:24.322347 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 03:01:24.322782 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 03:01:24.336014 ignition[1034]: INFO : Ignition 2.22.0
Mar 6 03:01:24.336014 ignition[1034]: INFO : Stage: umount
Mar 6 03:01:24.340322 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 03:01:24.340322 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 6 03:01:24.340322 ignition[1034]: INFO : umount: umount passed
Mar 6 03:01:24.340322 ignition[1034]: INFO : Ignition finished successfully
Mar 6 03:01:24.341628 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 03:01:24.341787 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 03:01:24.352074 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 03:01:24.352795 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 03:01:24.352905 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 03:01:24.360181 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 03:01:24.360558 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 03:01:24.366471 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 03:01:24.366550 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 03:01:24.372351 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 6 03:01:24.372426 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 6 03:01:24.376346 systemd[1]: Stopped target network.target - Network.
Mar 6 03:01:24.380302 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 03:01:24.380385 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 03:01:24.384372 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 03:01:24.388310 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 03:01:24.392414 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 03:01:24.394543 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 03:01:24.398491 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 03:01:24.402544 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 03:01:24.402598 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 03:01:24.406563 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 03:01:24.406720 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 03:01:24.411639 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 03:01:24.411833 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 03:01:24.415591 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 03:01:24.415774 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 03:01:24.419962 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 03:01:24.425972 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 03:01:24.426850 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 03:01:24.426986 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 03:01:24.441075 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 6 03:01:24.441535 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 03:01:24.441651 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 03:01:24.449483 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 6 03:01:24.449779 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 03:01:24.449893 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 03:01:24.452749 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 6 03:01:24.458346 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 03:01:24.458419 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 03:01:24.462324 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 03:01:24.462428 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 03:01:24.467444 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 03:01:24.475306 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 03:01:24.475499 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 03:01:24.484455 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 03:01:24.484549 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 03:01:24.490685 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 03:01:24.490760 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 03:01:24.493521 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 03:01:24.493705 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 03:01:24.498809 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 03:01:24.511492 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 6 03:01:24.511568 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 6 03:01:24.518633 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 03:01:24.519019 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 03:01:24.529024 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 03:01:24.529144 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 03:01:24.532409 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 03:01:24.532478 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 03:01:24.536368 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 03:01:24.536451 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 03:01:24.543319 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 03:01:24.543400 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 03:01:24.554337 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 03:01:24.554452 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 03:01:24.563859 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 03:01:24.568298 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 6 03:01:24.568383 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 03:01:24.571690 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 03:01:24.571860 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 03:01:24.580942 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 03:01:24.581157 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 03:01:24.591714 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 03:01:24.591882 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 03:01:24.594541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 03:01:24.594714 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 03:01:24.600868 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 6 03:01:24.600956 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 6 03:01:24.601019 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 6 03:01:24.601067 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 03:01:24.601603 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 03:01:24.601721 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 03:01:24.677302 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Mar 6 03:01:24.605706 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 03:01:24.605820 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 03:01:24.610795 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 03:01:24.614750 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 03:01:24.632879 systemd[1]: Switching root.
Mar 6 03:01:24.691301 systemd-journald[192]: Journal stopped
Mar 6 03:01:26.714993 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 03:01:26.715050 kernel: SELinux: policy capability open_perms=1
Mar 6 03:01:26.715080 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 03:01:26.715099 kernel: SELinux: policy capability always_check_network=0
Mar 6 03:01:26.715118 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 03:01:26.715137 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 03:01:26.715159 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 03:01:26.715179 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 03:01:26.715221 kernel: SELinux: policy capability userspace_initial_context=0
Mar 6 03:01:26.715243 kernel: audit: type=1403 audit(1772766085.301:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 03:01:26.715267 systemd[1]: Successfully loaded SELinux policy in 70.423ms.
Mar 6 03:01:26.715290 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.917ms.
Mar 6 03:01:26.715314 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 03:01:26.715335 systemd[1]: Detected virtualization google.
Mar 6 03:01:26.715362 systemd[1]: Detected architecture x86-64.
Mar 6 03:01:26.715383 systemd[1]: Detected first boot.
Mar 6 03:01:26.715405 systemd[1]: Initializing machine ID from random generator.
Mar 6 03:01:26.715424 zram_generator::config[1077]: No configuration found.
Mar 6 03:01:26.715443 kernel: Guest personality initialized and is inactive
Mar 6 03:01:26.715462 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 6 03:01:26.715485 kernel: Initialized host personality
Mar 6 03:01:26.715505 kernel: NET: Registered PF_VSOCK protocol family
Mar 6 03:01:26.715523 systemd[1]: Populated /etc with preset unit settings.
Mar 6 03:01:26.715543 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 6 03:01:26.715563 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 03:01:26.715581 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 03:01:26.715603 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 03:01:26.715628 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 03:01:26.715649 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 03:01:26.715668 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 03:01:26.715689 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 03:01:26.715710 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 03:01:26.715732 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 03:01:26.715755 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 03:01:26.715783 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 03:01:26.715804 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 03:01:26.715826 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 03:01:26.715851 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 03:01:26.715874 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 03:01:26.715898 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 03:01:26.715928 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 03:01:26.715965 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 03:01:26.715991 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 03:01:26.716018 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 03:01:26.716042 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 03:01:26.716066 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 03:01:26.716090 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 03:01:26.716114 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 03:01:26.716139 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 03:01:26.716162 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 03:01:26.716191 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 03:01:26.716437 systemd[1]: Reached target swap.target - Swaps.
Mar 6 03:01:26.716459 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 03:01:26.716480 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 03:01:26.716503 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 6 03:01:26.716526 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 03:01:26.716555 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 03:01:26.716577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 03:01:26.716599 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 03:01:26.716620 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 03:01:26.716643 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 03:01:26.716664 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 03:01:26.716686 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 03:01:26.716712 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 03:01:26.716735 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 03:01:26.716757 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 03:01:26.716780 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 03:01:26.716802 systemd[1]: Reached target machines.target - Containers.
Mar 6 03:01:26.716824 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 03:01:26.716846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 03:01:26.716867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 03:01:26.716893 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 03:01:26.716916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 03:01:26.716939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 03:01:26.716971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 03:01:26.716992 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 03:01:26.717014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 03:01:26.717036 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 03:01:26.717057 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 03:01:26.717079 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 03:01:26.717106 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 03:01:26.717129 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 03:01:26.717152 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 03:01:26.717174 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 03:01:26.717250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 03:01:26.717276 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 03:01:26.717301 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 03:01:26.717326 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 6 03:01:26.717353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 03:01:26.717376 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 03:01:26.717398 systemd[1]: Stopped verity-setup.service.
Mar 6 03:01:26.717420 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 03:01:26.717442 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 03:01:26.717465 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 03:01:26.717488 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 03:01:26.717510 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 03:01:26.717536 kernel: loop: module loaded
Mar 6 03:01:26.717557 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 03:01:26.717579 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 03:01:26.717601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 03:01:26.717622 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 03:01:26.717642 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 03:01:26.717663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 03:01:26.717685 kernel: ACPI: bus type drm_connector registered
Mar 6 03:01:26.717706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 03:01:26.717734 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 03:01:26.717758 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 03:01:26.717781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 03:01:26.717804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 03:01:26.717827 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 03:01:26.717851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 03:01:26.717873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 03:01:26.717896 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 03:01:26.717917 kernel: fuse: init (API version 7.41)
Mar 6 03:01:26.717953 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 03:01:26.717977 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 03:01:26.718000 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 03:01:26.718023 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 03:01:26.718056 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 03:01:26.718127 systemd-journald[1144]: Collecting audit messages is disabled.
Mar 6 03:01:26.718180 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 03:01:26.718236 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 03:01:26.718262 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 03:01:26.718285 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 6 03:01:26.718314 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 03:01:26.718340 systemd-journald[1144]: Journal started
Mar 6 03:01:26.718382 systemd-journald[1144]: Runtime Journal (/run/log/journal/ee2f7ebe443a476292945b670915aa24) is 8M, max 148.6M, 140.6M free.
Mar 6 03:01:26.150914 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 03:01:26.176958 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 6 03:01:26.177696 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 03:01:26.725233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 03:01:26.730231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 03:01:26.737613 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 03:01:26.741224 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 03:01:26.746225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 03:01:26.751243 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 03:01:26.760238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 03:01:26.769917 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 03:01:26.778361 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 03:01:26.788408 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 6 03:01:26.791687 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 03:01:26.796488 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 03:01:26.808488 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 03:01:26.851267 kernel: loop0: detected capacity change from 0 to 128560
Mar 6 03:01:26.856287 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 03:01:26.867394 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 03:01:26.879763 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 6 03:01:26.894642 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 03:01:26.918780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 03:01:26.940088 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 03:01:26.978274 kernel: loop1: detected capacity change from 0 to 110984
Mar 6 03:01:27.005606 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 6 03:01:27.007785 systemd-journald[1144]: Time spent on flushing to /var/log/journal/ee2f7ebe443a476292945b670915aa24 is 79.116ms for 972 entries.
Mar 6 03:01:27.007785 systemd-journald[1144]: System Journal (/var/log/journal/ee2f7ebe443a476292945b670915aa24) is 8M, max 584.8M, 576.8M free.
Mar 6 03:01:27.112426 systemd-journald[1144]: Received client request to flush runtime journal.
Mar 6 03:01:27.112507 kernel: loop2: detected capacity change from 0 to 50736
Mar 6 03:01:27.041063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 03:01:27.042111 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Mar 6 03:01:27.042142 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Mar 6 03:01:27.055648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 03:01:27.068015 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 03:01:27.119473 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 03:01:27.131240 kernel: loop3: detected capacity change from 0 to 228704
Mar 6 03:01:27.173694 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 03:01:27.180393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 03:01:27.183578 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 03:01:27.261746 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Mar 6 03:01:27.261783 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Mar 6 03:01:27.270626 kernel: loop4: detected capacity change from 0 to 128560
Mar 6 03:01:27.270352 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 03:01:27.304288 kernel: loop5: detected capacity change from 0 to 110984
Mar 6 03:01:27.351286 kernel: loop6: detected capacity change from 0 to 50736
Mar 6 03:01:27.386261 kernel: loop7: detected capacity change from 0 to 228704
Mar 6 03:01:27.427744 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Mar 6 03:01:27.430745 (sd-merge)[1227]: Merged extensions into '/usr'.
Mar 6 03:01:27.437668 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 03:01:27.437810 systemd[1]: Reloading...
Mar 6 03:01:27.621236 zram_generator::config[1250]: No configuration found.
Mar 6 03:01:27.921262 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 03:01:28.126880 systemd[1]: Reloading finished in 688 ms.
Mar 6 03:01:28.144273 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 03:01:28.147985 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 03:01:28.165403 systemd[1]: Starting ensure-sysext.service...
Mar 6 03:01:28.171373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 03:01:28.211061 systemd[1]: Reload requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)...
Mar 6 03:01:28.211284 systemd[1]: Reloading...
Mar 6 03:01:28.224567 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 6 03:01:28.225027 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 6 03:01:28.225679 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 03:01:28.226676 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 03:01:28.228352 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 03:01:28.228800 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Mar 6 03:01:28.228985 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Mar 6 03:01:28.237064 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 03:01:28.239393 systemd-tmpfiles[1295]: Skipping /boot
Mar 6 03:01:28.272917 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 03:01:28.272939 systemd-tmpfiles[1295]: Skipping /boot
Mar 6 03:01:28.324256 zram_generator::config[1322]: No configuration found.
Mar 6 03:01:28.562440 systemd[1]: Reloading finished in 350 ms.
Mar 6 03:01:28.585598 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 03:01:28.599857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 03:01:28.612007 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 6 03:01:28.617589 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 6 03:01:28.626456 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 6 03:01:28.633302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 03:01:28.640815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 03:01:28.648325 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 6 03:01:28.660564 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 03:01:28.661738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 03:01:28.669305 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 03:01:28.680555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 03:01:28.693880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 03:01:28.696448 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 03:01:28.697265 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 03:01:28.697442 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:01:28.706931 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:01:28.708297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 03:01:28.708564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 03:01:28.708697 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 03:01:28.714837 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 6 03:01:28.718293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:01:28.728718 systemd-udevd[1369]: Using default interface naming scheme 'v255'. 
Mar 6 03:01:28.731584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:01:28.732041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 03:01:28.736613 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 03:01:28.741794 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 6 03:01:28.744454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 03:01:28.744821 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 03:01:28.745126 systemd[1]: Reached target time-set.target - System Time Set. Mar 6 03:01:28.748447 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 03:01:28.772699 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 6 03:01:28.775727 systemd[1]: Finished ensure-sysext.service. Mar 6 03:01:28.778789 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 03:01:28.780415 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 03:01:28.783962 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 03:01:28.784382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 03:01:28.787925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 03:01:28.791143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 03:01:28.796801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 6 03:01:28.797748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 03:01:28.817164 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 6 03:01:28.821874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 03:01:28.822005 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 03:01:28.828296 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 6 03:01:28.832392 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 6 03:01:28.840324 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Mar 6 03:01:28.873732 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 03:01:28.882396 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 03:01:28.897733 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 6 03:01:28.900559 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 6 03:01:28.923275 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 6 03:01:28.926619 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 03:01:28.959176 augenrules[1437]: No rules Mar 6 03:01:28.960969 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 03:01:28.961613 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 03:01:29.015082 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Mar 6 03:01:29.100012 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. 
Mar 6 03:01:29.100072 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Mar 6 03:01:29.160727 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 6 03:01:29.267222 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 03:01:29.311679 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Mar 6 03:01:29.319126 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 6 03:01:29.346235 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Mar 6 03:01:29.395289 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 6 03:01:29.412217 kernel: ACPI: button: Power Button [PWRF] Mar 6 03:01:29.413418 systemd-networkd[1410]: lo: Link UP Mar 6 03:01:29.413794 systemd-networkd[1410]: lo: Gained carrier Mar 6 03:01:29.420133 systemd-networkd[1410]: Enumeration completed Mar 6 03:01:29.426957 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Mar 6 03:01:29.423912 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 03:01:29.424677 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 03:01:29.424685 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 03:01:29.427633 systemd-networkd[1410]: eth0: Link UP Mar 6 03:01:29.427910 systemd-networkd[1410]: eth0: Gained carrier Mar 6 03:01:29.427938 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 03:01:29.436127 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Mar 6 03:01:29.440325 systemd-networkd[1410]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011.c.flatcar-212911.internal' to 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:01:29.440347 systemd-networkd[1410]: eth0: DHCPv4 address 10.128.0.102/32, gateway 10.128.0.1 acquired from 169.254.169.254 Mar 6 03:01:29.450492 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 6 03:01:29.481437 systemd-resolved[1367]: Positive Trust Anchors: Mar 6 03:01:29.482436 systemd-resolved[1367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 03:01:29.482511 systemd-resolved[1367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 03:01:29.503253 kernel: ACPI: button: Sleep Button [SLPF] Mar 6 03:01:29.505538 systemd-resolved[1367]: Defaulting to hostname 'linux'. Mar 6 03:01:29.509723 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 03:01:29.518451 systemd[1]: Reached target network.target - Network. Mar 6 03:01:29.526341 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 03:01:29.536356 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 03:01:29.545479 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Mar 6 03:01:29.555423 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 6 03:01:29.565331 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 6 03:01:29.585220 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 6 03:01:29.590635 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 6 03:01:29.599515 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 6 03:01:29.610411 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 6 03:01:29.620322 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 6 03:01:29.620378 systemd[1]: Reached target paths.target - Path Units. Mar 6 03:01:29.629360 systemd[1]: Reached target timers.target - Timer Units. Mar 6 03:01:29.642209 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 6 03:01:29.653400 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 6 03:01:29.666884 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 6 03:01:29.678542 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 6 03:01:29.686286 kernel: EDAC MC: Ver: 3.0.0 Mar 6 03:01:29.692333 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 6 03:01:29.709776 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 6 03:01:29.718880 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 6 03:01:29.732296 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Mar 6 03:01:29.742766 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 6 03:01:29.777675 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 03:01:29.786466 systemd[1]: Reached target basic.target - Basic System. Mar 6 03:01:29.794480 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 6 03:01:29.794670 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 6 03:01:29.796382 systemd[1]: Starting containerd.service - containerd container runtime... Mar 6 03:01:29.819703 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 6 03:01:29.840376 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 6 03:01:29.870819 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 6 03:01:29.887481 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 6 03:01:29.898985 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 6 03:01:29.907452 coreos-metadata[1495]: Mar 06 03:01:29.907 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Mar 6 03:01:29.908331 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Mar 6 03:01:29.910508 coreos-metadata[1495]: Mar 06 03:01:29.910 INFO Fetch successful Mar 6 03:01:29.910508 coreos-metadata[1495]: Mar 06 03:01:29.910 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Mar 6 03:01:29.910508 coreos-metadata[1495]: Mar 06 03:01:29.910 INFO Fetch successful Mar 6 03:01:29.910508 coreos-metadata[1495]: Mar 06 03:01:29.910 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Mar 6 03:01:29.911820 jq[1500]: false Mar 6 03:01:29.912209 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 6 03:01:29.914894 coreos-metadata[1495]: Mar 06 03:01:29.914 INFO Fetch successful Mar 6 03:01:29.915187 coreos-metadata[1495]: Mar 06 03:01:29.915 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Mar 6 03:01:29.915187 coreos-metadata[1495]: Mar 06 03:01:29.915 INFO Fetch successful Mar 6 03:01:29.924514 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 6 03:01:29.936157 systemd[1]: Started ntpd.service - Network Time Service. Mar 6 03:01:29.949229 google_oslogin_nss_cache[1502]: oslogin_cache_refresh[1502]: Refreshing passwd entry cache Mar 6 03:01:29.948563 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 6 03:01:29.947382 oslogin_cache_refresh[1502]: Refreshing passwd entry cache Mar 6 03:01:29.959951 google_oslogin_nss_cache[1502]: oslogin_cache_refresh[1502]: Failure getting users, quitting Mar 6 03:01:29.959951 google_oslogin_nss_cache[1502]: oslogin_cache_refresh[1502]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 6 03:01:29.959951 google_oslogin_nss_cache[1502]: oslogin_cache_refresh[1502]: Refreshing group entry cache Mar 6 03:01:29.956485 oslogin_cache_refresh[1502]: Failure getting users, quitting Mar 6 03:01:29.956510 oslogin_cache_refresh[1502]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 6 03:01:29.956583 oslogin_cache_refresh[1502]: Refreshing group entry cache Mar 6 03:01:29.962428 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 6 03:01:29.965635 google_oslogin_nss_cache[1502]: oslogin_cache_refresh[1502]: Failure getting groups, quitting Mar 6 03:01:29.965635 google_oslogin_nss_cache[1502]: oslogin_cache_refresh[1502]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 6 03:01:29.964414 oslogin_cache_refresh[1502]: Failure getting groups, quitting Mar 6 03:01:29.964433 oslogin_cache_refresh[1502]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 6 03:01:29.975887 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 6 03:01:29.980612 extend-filesystems[1501]: Found /dev/sda6 Mar 6 03:01:29.992632 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 6 03:01:29.998776 extend-filesystems[1501]: Found /dev/sda9 Mar 6 03:01:30.012244 extend-filesystems[1501]: Checking size of /dev/sda9 Mar 6 03:01:30.005423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 03:01:30.030928 extend-filesystems[1501]: Resized partition /dev/sda9 Mar 6 03:01:30.022787 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Mar 6 03:01:30.024470 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 6 03:01:30.032139 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 6 03:01:30.045936 ntpd[1504]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:01:30.050564 extend-filesystems[1528]: resize2fs 1.47.3 (8-Jul-2025) Mar 6 03:01:30.058346 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:01:30.058346 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:01:30.058346 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: ---------------------------------------------------- Mar 6 03:01:30.058346 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:01:30.058346 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:01:30.058346 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: corporation. Support and training for ntp-4 are Mar 6 03:01:30.058346 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: available at https://www.nwtime.org/support Mar 6 03:01:30.058346 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: ---------------------------------------------------- Mar 6 03:01:30.046024 ntpd[1504]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:01:30.052743 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 6 03:01:30.046041 ntpd[1504]: ---------------------------------------------------- Mar 6 03:01:30.046054 ntpd[1504]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:01:30.046067 ntpd[1504]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:01:30.046080 ntpd[1504]: corporation. Support and training for ntp-4 are
Mar 6 03:01:30.046093 ntpd[1504]: available at https://www.nwtime.org/support Mar 6 03:01:30.046105 ntpd[1504]: ---------------------------------------------------- Mar 6 03:01:30.080283 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Mar 6 03:01:30.082746 ntpd[1504]: proto: precision = 0.096 usec (-23) Mar 6 03:01:30.087504 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: proto: precision = 0.096 usec (-23) Mar 6 03:01:30.130892 kernel: ntpd[1504]: segfault at 24 ip 000055fc1d006aeb sp 00007fffb36b82d0 error 4 in ntpd[68aeb,55fc1cfa4000+80000] likely on CPU 0 (core 0, socket 0) Mar 6 03:01:30.130983 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Mar 6 03:01:30.131087 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: basedate set to 2026-02-21 Mar 6 03:01:30.131087 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: gps base set to 2026-02-22 (week 2407) Mar 6 03:01:30.131087 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:01:30.131087 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:01:30.131087 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:01:30.131087 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: Listen normally on 3 eth0 10.128.0.102:123 Mar 6 03:01:30.131087 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: Listen normally on 4 lo [::1]:123 Mar 6 03:01:30.131087 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: bind(21) AF_INET6 [fe80::4001:aff:fe80:66%2]:123 flags 0x811 failed: Cannot assign requested address Mar 6 03:01:30.131087 ntpd[1504]: 6 Mar 03:01:30 ntpd[1504]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:66%2]:123 Mar 6 03:01:30.097097 ntpd[1504]: basedate set to 2026-02-21 Mar 6 03:01:30.097125 ntpd[1504]: gps base set to 2026-02-22 (week 2407)
Mar 6 03:01:30.097316 ntpd[1504]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:01:30.097361 ntpd[1504]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:01:30.097621 ntpd[1504]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:01:30.097666 ntpd[1504]: Listen normally on 3 eth0 10.128.0.102:123 Mar 6 03:01:30.097720 ntpd[1504]: Listen normally on 4 lo [::1]:123 Mar 6 03:01:30.097765 ntpd[1504]: bind(21) AF_INET6 [fe80::4001:aff:fe80:66%2]:123 flags 0x811 failed: Cannot assign requested address Mar 6 03:01:30.097797 ntpd[1504]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:66%2]:123 Mar 6 03:01:30.151040 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 6 03:01:30.151890 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 6 03:01:30.153265 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 6 03:01:30.154034 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 6 03:01:30.155625 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 6 03:01:30.157573 systemd[1]: motdgen.service: Deactivated successfully. Mar 6 03:01:30.157921 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 6 03:01:30.168390 jq[1531]: true Mar 6 03:01:30.173692 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 6 03:01:30.179680 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 6 03:01:30.193224 update_engine[1526]: I20260306 03:01:30.186062 1526 main.cc:92] Flatcar Update Engine starting Mar 6 03:01:30.203633 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Mar 6 03:01:30.201735 systemd-coredump[1543]: Process 1504 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 6 03:01:30.213230 extend-filesystems[1528]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 6 03:01:30.213230 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 6 03:01:30.213230 extend-filesystems[1528]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Mar 6 03:01:30.243700 extend-filesystems[1501]: Resized filesystem in /dev/sda9 Mar 6 03:01:30.245371 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 6 03:01:30.247564 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 6 03:01:30.274248 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 6 03:01:30.296268 jq[1542]: true Mar 6 03:01:30.295922 systemd-logind[1518]: Watching system buttons on /dev/input/event2 (Power Button) Mar 6 03:01:30.295954 systemd-logind[1518]: Watching system buttons on /dev/input/event3 (Sleep Button) Mar 6 03:01:30.295987 systemd-logind[1518]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 6 03:01:30.297094 systemd-logind[1518]: New seat seat0. Mar 6 03:01:30.418081 systemd[1]: Started systemd-logind.service - User Login Management. Mar 6 03:01:30.428167 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 03:01:30.441225 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 6 03:01:30.515466 dbus-daemon[1496]: [system] SELinux support is enabled Mar 6 03:01:30.516923 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 6 03:01:30.528490 dbus-daemon[1496]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1410 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 6 03:01:30.531471 update_engine[1526]: I20260306 03:01:30.531262 1526 update_check_scheduler.cc:74] Next update check in 9m21s Mar 6 03:01:30.538062 tar[1538]: linux-amd64/LICENSE Mar 6 03:01:30.538551 tar[1538]: linux-amd64/helm Mar 6 03:01:30.544678 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Mar 6 03:01:30.549391 dbus-daemon[1496]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 6 03:01:30.556979 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 03:01:30.557118 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 6 03:01:30.557165 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 6 03:01:30.567274 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 6 03:01:30.567314 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 6 03:01:30.586914 systemd[1]: Started systemd-coredump@0-1543-0.service - Process Core Dump (PID 1543/UID 0). Mar 6 03:01:30.597412 systemd[1]: Started update-engine.service - Update Engine. Mar 6 03:01:30.604380 bash[1583]: Updated "/home/core/.ssh/authorized_keys" Mar 6 03:01:30.606858 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Mar 6 03:01:30.631650 systemd[1]: Starting sshkeys.service... Mar 6 03:01:30.641641 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 6 03:01:30.652621 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 6 03:01:30.689269 sshd_keygen[1541]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 03:01:30.702451 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 6 03:01:30.716657 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 6 03:01:30.756119 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 03:01:30.780380 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 03:01:30.852834 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 03:01:30.854274 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 03:01:30.875249 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 03:01:30.941618 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 03:01:30.955178 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 03:01:30.968483 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 03:01:30.978781 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 6 03:01:31.016822 coreos-metadata[1594]: Mar 06 03:01:31.016 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Mar 6 03:01:31.019995 coreos-metadata[1594]: Mar 06 03:01:31.019 INFO Fetch failed with 404: resource not found Mar 6 03:01:31.019995 coreos-metadata[1594]: Mar 06 03:01:31.019 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Mar 6 03:01:31.023476 coreos-metadata[1594]: Mar 06 03:01:31.022 INFO Fetch successful Mar 6 03:01:31.023476 coreos-metadata[1594]: Mar 06 03:01:31.022 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Mar 6 03:01:31.024232 coreos-metadata[1594]: Mar 06 03:01:31.023 INFO Fetch failed with 404: resource not found Mar 6 03:01:31.024232 coreos-metadata[1594]: Mar 06 03:01:31.023 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Mar 6 03:01:31.024606 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 6 03:01:31.026113 coreos-metadata[1594]: Mar 06 03:01:31.025 INFO Fetch failed with 404: resource not found Mar 6 03:01:31.026113 coreos-metadata[1594]: Mar 06 03:01:31.025 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Mar 6 03:01:31.028090 containerd[1544]: time="2026-03-06T03:01:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 6 03:01:31.031900 coreos-metadata[1594]: Mar 06 03:01:31.029 INFO Fetch successful Mar 6 03:01:31.034843 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 6 03:01:31.035976 dbus-daemon[1496]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 6 03:01:31.036982 containerd[1544]: time="2026-03-06T03:01:31.035312245Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 6 03:01:31.036850 dbus-daemon[1496]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1587 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 6 03:01:31.038340 unknown[1594]: wrote ssh authorized keys file for user: core Mar 6 03:01:31.065630 systemd[1]: Starting polkit.service - Authorization Manager... Mar 6 03:01:31.068595 systemd-networkd[1410]: eth0: Gained IPv6LL Mar 6 03:01:31.080003 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 03:01:31.088746 update-ssh-keys[1613]: Updated "/home/core/.ssh/authorized_keys" Mar 6 03:01:31.091996 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 6 03:01:31.106305 systemd[1]: Reached target network-online.target - Network is Online. 
Mar 6 03:01:31.112290 containerd[1544]: time="2026-03-06T03:01:31.112243773Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.404µs" Mar 6 03:01:31.112513 containerd[1544]: time="2026-03-06T03:01:31.112470318Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 6 03:01:31.112628 containerd[1544]: time="2026-03-06T03:01:31.112607204Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 6 03:01:31.113143 containerd[1544]: time="2026-03-06T03:01:31.113111848Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 6 03:01:31.113310 containerd[1544]: time="2026-03-06T03:01:31.113284165Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 6 03:01:31.113660 containerd[1544]: time="2026-03-06T03:01:31.113630916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 03:01:31.116876 containerd[1544]: time="2026-03-06T03:01:31.114325077Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 03:01:31.116876 containerd[1544]: time="2026-03-06T03:01:31.114355132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 6 03:01:31.116876 containerd[1544]: time="2026-03-06T03:01:31.114702188Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 6 03:01:31.116876 containerd[1544]: time="2026-03-06T03:01:31.114730247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 03:01:31.116876 containerd[1544]: time="2026-03-06T03:01:31.114750523Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 03:01:31.116876 containerd[1544]: time="2026-03-06T03:01:31.114766648Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 6 03:01:31.116876 containerd[1544]: time="2026-03-06T03:01:31.114880833Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 6 03:01:31.122351 containerd[1544]: time="2026-03-06T03:01:31.120550278Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 03:01:31.122351 containerd[1544]: time="2026-03-06T03:01:31.120615152Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 03:01:31.122351 containerd[1544]: time="2026-03-06T03:01:31.120635155Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 6 03:01:31.122351 containerd[1544]: time="2026-03-06T03:01:31.121489522Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 6 03:01:31.121804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 03:01:31.123894 containerd[1544]: time="2026-03-06T03:01:31.123860063Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 6 03:01:31.124226 containerd[1544]: time="2026-03-06T03:01:31.124084425Z" level=info msg="metadata content store policy set" policy=shared Mar 6 03:01:31.129840 systemd-coredump[1584]: Process 1504 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1504: #0 0x000055fc1d006aeb n/a (ntpd + 0x68aeb) #1 0x000055fc1cfafcdf n/a (ntpd + 0x11cdf) #2 0x000055fc1cfb0575 n/a (ntpd + 0x12575) #3 0x000055fc1cfabd8a n/a (ntpd + 0xdd8a) #4 0x000055fc1cfad5d3 n/a (ntpd + 0xf5d3) #5 0x000055fc1cfb5fd1 n/a (ntpd + 0x17fd1) #6 0x000055fc1cfa6c2d n/a (ntpd + 0x8c2d) #7 0x00007f669dcb116c n/a (libc.so.6 + 0x2716c) #8 0x00007f669dcb1229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055fc1cfa6c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130431250Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130494426Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130519346Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130539908Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 6 03:01:31.135578 containerd[1544]: 
time="2026-03-06T03:01:31.130570430Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130590166Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130611103Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130629194Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130645730Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130682701Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130701023Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130720803Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130877486Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 6 03:01:31.135578 containerd[1544]: time="2026-03-06T03:01:31.130908545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.130932849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 6 03:01:31.136946 containerd[1544]: 
time="2026-03-06T03:01:31.130954939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.130976185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.130993881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.131012396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.131029682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.131048706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.131066693Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.131083582Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.131154537Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.131174917Z" level=info msg="Start snapshots syncer" Mar 6 03:01:31.136946 containerd[1544]: time="2026-03-06T03:01:31.131227041Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 6 03:01:31.137670 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 6 03:01:31.141187 containerd[1544]: time="2026-03-06T03:01:31.131602679Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 6 03:01:31.141187 containerd[1544]: time="2026-03-06T03:01:31.131690888Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.131757455Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.131898156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.131928609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.131946820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.131965185Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.131986802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.132004596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.132023516Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.132059465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.132077725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.132097827Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 
Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.132166806Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.132191330Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 03:01:31.142824 containerd[1544]: time="2026-03-06T03:01:31.140338953Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 03:01:31.143902 containerd[1544]: time="2026-03-06T03:01:31.140377754Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 03:01:31.143902 containerd[1544]: time="2026-03-06T03:01:31.140394876Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 6 03:01:31.143902 containerd[1544]: time="2026-03-06T03:01:31.140415567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 6 03:01:31.143902 containerd[1544]: time="2026-03-06T03:01:31.140443617Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 6 03:01:31.143902 containerd[1544]: time="2026-03-06T03:01:31.140469371Z" level=info msg="runtime interface created" Mar 6 03:01:31.143902 containerd[1544]: time="2026-03-06T03:01:31.140478370Z" level=info msg="created NRI interface" Mar 6 03:01:31.143902 containerd[1544]: time="2026-03-06T03:01:31.140492772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 6 03:01:31.143902 containerd[1544]: time="2026-03-06T03:01:31.140514532Z" level=info msg="Connect containerd service" Mar 6 03:01:31.143902 containerd[1544]: time="2026-03-06T03:01:31.140553236Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Mar 6 03:01:31.147353 containerd[1544]: time="2026-03-06T03:01:31.147165283Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 03:01:31.150110 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Mar 6 03:01:31.185849 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 6 03:01:31.188176 init.sh[1623]: + '[' -e /etc/default/instance_configs.cfg.template ']' Mar 6 03:01:31.188176 init.sh[1623]: + echo -e '[InstanceSetup]\nset_host_keys = false' Mar 6 03:01:31.186089 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 6 03:01:31.192547 init.sh[1623]: + /usr/bin/google_instance_setup Mar 6 03:01:31.191863 systemd[1]: systemd-coredump@0-1543-0.service: Deactivated successfully. Mar 6 03:01:31.202455 systemd[1]: Finished sshkeys.service. Mar 6 03:01:31.300599 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Mar 6 03:01:31.302231 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 03:01:31.318094 systemd[1]: Started ntpd.service - Network Time Service. 
Mar 6 03:01:31.453693 ntpd[1650]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:01:31.453799 ntpd[1650]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:01:31.454260 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:56:02 UTC 2026 (1): Starting Mar 6 03:01:31.454260 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 03:01:31.454260 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: ---------------------------------------------------- Mar 6 03:01:31.454260 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:01:31.454260 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:01:31.454260 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: corporation. Support and training for ntp-4 are Mar 6 03:01:31.454260 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: available at https://www.nwtime.org/support Mar 6 03:01:31.454260 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: ---------------------------------------------------- Mar 6 03:01:31.453815 ntpd[1650]: ---------------------------------------------------- Mar 6 03:01:31.453828 ntpd[1650]: ntp-4 is maintained by Network Time Foundation, Mar 6 03:01:31.455087 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: proto: precision = 0.100 usec (-23) Mar 6 03:01:31.453842 ntpd[1650]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 03:01:31.453855 ntpd[1650]: corporation. 
Support and training for ntp-4 are Mar 6 03:01:31.456346 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: basedate set to 2026-02-21 Mar 6 03:01:31.453868 ntpd[1650]: available at https://www.nwtime.org/support Mar 6 03:01:31.453881 ntpd[1650]: ---------------------------------------------------- Mar 6 03:01:31.454862 ntpd[1650]: proto: precision = 0.100 usec (-23) Mar 6 03:01:31.455176 ntpd[1650]: basedate set to 2026-02-21 Mar 6 03:01:31.458592 ntpd[1650]: gps base set to 2026-02-22 (week 2407) Mar 6 03:01:31.458693 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: gps base set to 2026-02-22 (week 2407) Mar 6 03:01:31.458745 ntpd[1650]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:01:31.458799 ntpd[1650]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:01:31.458862 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 03:01:31.458862 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 03:01:31.459061 ntpd[1650]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:01:31.459114 ntpd[1650]: Listen normally on 3 eth0 10.128.0.102:123 Mar 6 03:01:31.459782 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 03:01:31.459782 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: Listen normally on 3 eth0 10.128.0.102:123 Mar 6 03:01:31.459782 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: Listen normally on 4 lo [::1]:123 Mar 6 03:01:31.459782 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:66%2]:123 Mar 6 03:01:31.459782 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: Listening on routing socket on fd #22 for interface updates Mar 6 03:01:31.459158 ntpd[1650]: Listen normally on 4 lo [::1]:123 Mar 6 03:01:31.459221 ntpd[1650]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:66%2]:123 Mar 6 03:01:31.459261 ntpd[1650]: Listening on routing socket on fd #22 for interface updates Mar 6 03:01:31.469497 containerd[1544]: time="2026-03-06T03:01:31.468858009Z" level=info msg="Start 
subscribing containerd event" Mar 6 03:01:31.470410 containerd[1544]: time="2026-03-06T03:01:31.470036133Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 03:01:31.470644 containerd[1544]: time="2026-03-06T03:01:31.470619924Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 6 03:01:31.472166 containerd[1544]: time="2026-03-06T03:01:31.472110240Z" level=info msg="Start recovering state" Mar 6 03:01:31.472208 ntpd[1650]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 03:01:31.472335 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 03:01:31.472335 ntpd[1650]: 6 Mar 03:01:31 ntpd[1650]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 03:01:31.472248 ntpd[1650]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 03:01:31.474213 containerd[1544]: time="2026-03-06T03:01:31.474168951Z" level=info msg="Start event monitor" Mar 6 03:01:31.474347 containerd[1544]: time="2026-03-06T03:01:31.474327873Z" level=info msg="Start cni network conf syncer for default" Mar 6 03:01:31.474572 containerd[1544]: time="2026-03-06T03:01:31.474554020Z" level=info msg="Start streaming server" Mar 6 03:01:31.474679 containerd[1544]: time="2026-03-06T03:01:31.474662560Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 6 03:01:31.474851 containerd[1544]: time="2026-03-06T03:01:31.474744226Z" level=info msg="runtime interface starting up..." Mar 6 03:01:31.474851 containerd[1544]: time="2026-03-06T03:01:31.474758714Z" level=info msg="starting plugins..." Mar 6 03:01:31.474851 containerd[1544]: time="2026-03-06T03:01:31.474783358Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 6 03:01:31.477419 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 6 03:01:31.478021 containerd[1544]: time="2026-03-06T03:01:31.477774512Z" level=info msg="containerd successfully booted in 0.455626s" Mar 6 03:01:31.565999 polkitd[1615]: Started polkitd version 126 Mar 6 03:01:31.582095 polkitd[1615]: Loading rules from directory /etc/polkit-1/rules.d Mar 6 03:01:31.582732 polkitd[1615]: Loading rules from directory /run/polkit-1/rules.d Mar 6 03:01:31.582808 polkitd[1615]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 6 03:01:31.586471 polkitd[1615]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 6 03:01:31.586529 polkitd[1615]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 6 03:01:31.586586 polkitd[1615]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 6 03:01:31.589171 polkitd[1615]: Finished loading, compiling and executing 2 rules Mar 6 03:01:31.591859 systemd[1]: Started polkit.service - Authorization Manager. Mar 6 03:01:31.594654 dbus-daemon[1496]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 6 03:01:31.595324 polkitd[1615]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 6 03:01:31.635486 systemd-hostnamed[1587]: Hostname set to (transient) Mar 6 03:01:31.637518 systemd-resolved[1367]: System hostname changed to 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011'. Mar 6 03:01:31.883981 tar[1538]: linux-amd64/README.md Mar 6 03:01:31.910765 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 03:01:32.114755 instance-setup[1629]: INFO Running google_set_multiqueue. Mar 6 03:01:32.133324 instance-setup[1629]: INFO Set channels for eth0 to 2. Mar 6 03:01:32.138527 instance-setup[1629]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
Mar 6 03:01:32.141122 instance-setup[1629]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Mar 6 03:01:32.141222 instance-setup[1629]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Mar 6 03:01:32.142813 instance-setup[1629]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Mar 6 03:01:32.143311 instance-setup[1629]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Mar 6 03:01:32.145623 instance-setup[1629]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Mar 6 03:01:32.145692 instance-setup[1629]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Mar 6 03:01:32.147190 instance-setup[1629]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Mar 6 03:01:32.155831 instance-setup[1629]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 6 03:01:32.160501 instance-setup[1629]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 6 03:01:32.163111 instance-setup[1629]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Mar 6 03:01:32.163189 instance-setup[1629]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Mar 6 03:01:32.184352 init.sh[1623]: + /usr/bin/google_metadata_script_runner --script-type startup Mar 6 03:01:32.346182 startup-script[1699]: INFO Starting startup scripts. Mar 6 03:01:32.352040 startup-script[1699]: INFO No startup scripts found in metadata. Mar 6 03:01:32.352119 startup-script[1699]: INFO Finished running startup scripts. Mar 6 03:01:32.374212 init.sh[1623]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Mar 6 03:01:32.374212 init.sh[1623]: + daemon_pids=() Mar 6 03:01:32.374212 init.sh[1623]: + for d in accounts clock_skew network Mar 6 03:01:32.376310 init.sh[1623]: + daemon_pids+=($!) 
Mar 6 03:01:32.376310 init.sh[1623]: + for d in accounts clock_skew network Mar 6 03:01:32.376310 init.sh[1623]: + daemon_pids+=($!) Mar 6 03:01:32.376310 init.sh[1623]: + for d in accounts clock_skew network Mar 6 03:01:32.376310 init.sh[1623]: + daemon_pids+=($!) Mar 6 03:01:32.376584 init.sh[1702]: + /usr/bin/google_accounts_daemon Mar 6 03:01:32.378235 init.sh[1703]: + /usr/bin/google_clock_skew_daemon Mar 6 03:01:32.378611 init.sh[1704]: + /usr/bin/google_network_daemon Mar 6 03:01:32.380007 init.sh[1623]: + NOTIFY_SOCKET=/run/systemd/notify Mar 6 03:01:32.380007 init.sh[1623]: + /usr/bin/systemd-notify --ready Mar 6 03:01:32.393064 systemd[1]: Started oem-gce.service - GCE Linux Agent. Mar 6 03:01:32.404555 init.sh[1623]: + wait -n 1702 1703 1704 Mar 6 03:01:32.499438 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 03:01:32.511188 systemd[1]: Started sshd@0-10.128.0.102:22-20.161.92.111:53130.service - OpenSSH per-connection server daemon (20.161.92.111:53130). Mar 6 03:01:32.677611 google-networking[1704]: INFO Starting Google Networking daemon. Mar 6 03:01:32.828499 google-clock-skew[1703]: INFO Starting Google Clock Skew daemon. Mar 6 03:01:32.834858 google-clock-skew[1703]: INFO Clock drift token has changed: 0. Mar 6 03:01:32.874941 sshd[1707]: Accepted publickey for core from 20.161.92.111 port 53130 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg Mar 6 03:01:32.877730 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:01:32.891975 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 03:01:32.903061 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 03:01:32.933271 systemd-logind[1518]: New session 1 of user core. 
Mar 6 03:01:32.942834 groupadd[1718]: group added to /etc/group: name=google-sudoers, GID=1000 Mar 6 03:01:32.951322 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 03:01:32.953711 groupadd[1718]: group added to /etc/gshadow: name=google-sudoers Mar 6 03:01:32.969498 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 03:01:33.004847 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 03:01:33.011451 groupadd[1718]: new group: name=google-sudoers, GID=1000 Mar 6 03:01:33.013684 systemd-logind[1518]: New session c1 of user core. Mar 6 03:01:33.058024 google-accounts[1702]: INFO Starting Google Accounts daemon. Mar 6 03:01:33.075105 google-accounts[1702]: WARNING OS Login not installed. Mar 6 03:01:33.077375 google-accounts[1702]: INFO Creating a new user account for 0. Mar 6 03:01:33.087269 init.sh[1735]: useradd: invalid user name '0': use --badname to ignore Mar 6 03:01:33.087605 google-accounts[1702]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Mar 6 03:01:33.123413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:01:33.133977 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 03:01:33.150115 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 03:01:33.280278 systemd[1724]: Queued start job for default target default.target. Mar 6 03:01:33.288141 systemd[1724]: Created slice app.slice - User Application Slice. Mar 6 03:01:33.288190 systemd[1724]: Reached target paths.target - Paths. Mar 6 03:01:33.288622 systemd[1724]: Reached target timers.target - Timers. Mar 6 03:01:33.292327 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 6 03:01:33.318728 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 03:01:33.320269 systemd[1724]: Reached target sockets.target - Sockets. Mar 6 03:01:33.320522 systemd[1724]: Reached target basic.target - Basic System. Mar 6 03:01:33.320737 systemd[1724]: Reached target default.target - Main User Target. Mar 6 03:01:33.320785 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 03:01:33.320799 systemd[1724]: Startup finished in 294ms. Mar 6 03:01:33.336427 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 03:01:33.346067 systemd[1]: Startup finished in 3.952s (kernel) + 7.504s (initrd) + 8.112s (userspace) = 19.568s. Mar 6 03:01:33.482550 systemd[1]: Started sshd@1-10.128.0.102:22-20.161.92.111:53140.service - OpenSSH per-connection server daemon (20.161.92.111:53140). Mar 6 03:01:34.000171 systemd-resolved[1367]: Clock change detected. Flushing caches. Mar 6 03:01:34.001807 google-clock-skew[1703]: INFO Synced system time with hardware clock. Mar 6 03:01:34.076840 sshd[1755]: Accepted publickey for core from 20.161.92.111 port 53140 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg Mar 6 03:01:34.079503 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:01:34.089316 systemd-logind[1518]: New session 2 of user core. Mar 6 03:01:34.093357 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 03:01:34.191710 sshd[1758]: Connection closed by 20.161.92.111 port 53140 Mar 6 03:01:34.193330 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Mar 6 03:01:34.199796 systemd[1]: sshd@1-10.128.0.102:22-20.161.92.111:53140.service: Deactivated successfully. Mar 6 03:01:34.203947 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 03:01:34.208420 systemd-logind[1518]: Session 2 logged out. Waiting for processes to exit. Mar 6 03:01:34.210622 systemd-logind[1518]: Removed session 2. 
Mar 6 03:01:34.242449 systemd[1]: Started sshd@2-10.128.0.102:22-20.161.92.111:53148.service - OpenSSH per-connection server daemon (20.161.92.111:53148). Mar 6 03:01:34.416758 kubelet[1741]: E0306 03:01:34.416696 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 03:01:34.419759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 03:01:34.420017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 03:01:34.420695 systemd[1]: kubelet.service: Consumed 1.283s CPU time, 269.3M memory peak. Mar 6 03:01:34.487534 sshd[1765]: Accepted publickey for core from 20.161.92.111 port 53148 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg Mar 6 03:01:34.489182 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:01:34.497143 systemd-logind[1518]: New session 3 of user core. Mar 6 03:01:34.510298 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 03:01:34.595186 sshd[1769]: Connection closed by 20.161.92.111 port 53148 Mar 6 03:01:34.596423 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Mar 6 03:01:34.601418 systemd[1]: sshd@2-10.128.0.102:22-20.161.92.111:53148.service: Deactivated successfully. Mar 6 03:01:34.603799 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 03:01:34.606672 systemd-logind[1518]: Session 3 logged out. Waiting for processes to exit. Mar 6 03:01:34.608364 systemd-logind[1518]: Removed session 3. Mar 6 03:01:34.641233 systemd[1]: Started sshd@3-10.128.0.102:22-20.161.92.111:53158.service - OpenSSH per-connection server daemon (20.161.92.111:53158). 
Mar 6 03:01:34.878780 sshd[1775]: Accepted publickey for core from 20.161.92.111 port 53158 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg Mar 6 03:01:34.880782 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:01:34.888145 systemd-logind[1518]: New session 4 of user core. Mar 6 03:01:34.898305 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 03:01:34.989567 sshd[1778]: Connection closed by 20.161.92.111 port 53158 Mar 6 03:01:34.990436 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Mar 6 03:01:34.996382 systemd-logind[1518]: Session 4 logged out. Waiting for processes to exit. Mar 6 03:01:34.996865 systemd[1]: sshd@3-10.128.0.102:22-20.161.92.111:53158.service: Deactivated successfully. Mar 6 03:01:34.999585 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 03:01:35.001901 systemd-logind[1518]: Removed session 4. Mar 6 03:01:35.033262 systemd[1]: Started sshd@4-10.128.0.102:22-20.161.92.111:53160.service - OpenSSH per-connection server daemon (20.161.92.111:53160). Mar 6 03:01:35.249217 sshd[1784]: Accepted publickey for core from 20.161.92.111 port 53160 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg Mar 6 03:01:35.250752 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:01:35.258151 systemd-logind[1518]: New session 5 of user core. Mar 6 03:01:35.267318 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 6 03:01:35.336948 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 03:01:35.337464 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 03:01:35.352401 sudo[1788]: pam_unix(sudo:session): session closed for user root Mar 6 03:01:35.383450 sshd[1787]: Connection closed by 20.161.92.111 port 53160 Mar 6 03:01:35.384566 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Mar 6 03:01:35.390764 systemd-logind[1518]: Session 5 logged out. Waiting for processes to exit. Mar 6 03:01:35.391260 systemd[1]: sshd@4-10.128.0.102:22-20.161.92.111:53160.service: Deactivated successfully. Mar 6 03:01:35.393709 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 03:01:35.396190 systemd-logind[1518]: Removed session 5. Mar 6 03:01:35.430309 systemd[1]: Started sshd@5-10.128.0.102:22-20.161.92.111:53168.service - OpenSSH per-connection server daemon (20.161.92.111:53168). Mar 6 03:01:35.678658 sshd[1794]: Accepted publickey for core from 20.161.92.111 port 53168 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg Mar 6 03:01:35.680711 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:01:35.688146 systemd-logind[1518]: New session 6 of user core. Mar 6 03:01:35.697330 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 6 03:01:35.761827 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 03:01:35.762342 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 03:01:35.769315 sudo[1799]: pam_unix(sudo:session): session closed for user root Mar 6 03:01:35.782931 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 6 03:01:35.783418 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 03:01:35.796134 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 03:01:35.843678 augenrules[1821]: No rules Mar 6 03:01:35.844559 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 03:01:35.844801 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 03:01:35.846460 sudo[1798]: pam_unix(sudo:session): session closed for user root Mar 6 03:01:35.882705 sshd[1797]: Connection closed by 20.161.92.111 port 53168 Mar 6 03:01:35.884366 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Mar 6 03:01:35.889149 systemd[1]: sshd@5-10.128.0.102:22-20.161.92.111:53168.service: Deactivated successfully. Mar 6 03:01:35.891583 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 03:01:35.893576 systemd-logind[1518]: Session 6 logged out. Waiting for processes to exit. Mar 6 03:01:35.895724 systemd-logind[1518]: Removed session 6. Mar 6 03:01:35.932283 systemd[1]: Started sshd@6-10.128.0.102:22-20.161.92.111:53176.service - OpenSSH per-connection server daemon (20.161.92.111:53176). 
Mar 6 03:01:36.175468 sshd[1830]: Accepted publickey for core from 20.161.92.111 port 53176 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg Mar 6 03:01:36.177221 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 03:01:36.184613 systemd-logind[1518]: New session 7 of user core. Mar 6 03:01:36.190285 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 03:01:36.261346 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 03:01:36.261852 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 03:01:36.778407 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 03:01:36.792783 (dockerd)[1852]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 03:01:37.153340 dockerd[1852]: time="2026-03-06T03:01:37.150347560Z" level=info msg="Starting up" Mar 6 03:01:37.155471 dockerd[1852]: time="2026-03-06T03:01:37.155431456Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 6 03:01:37.170577 dockerd[1852]: time="2026-03-06T03:01:37.170517573Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 6 03:01:37.355459 dockerd[1852]: time="2026-03-06T03:01:37.355302505Z" level=info msg="Loading containers: start." Mar 6 03:01:37.375098 kernel: Initializing XFRM netlink socket Mar 6 03:01:37.714284 systemd-networkd[1410]: docker0: Link UP Mar 6 03:01:37.721228 dockerd[1852]: time="2026-03-06T03:01:37.721162875Z" level=info msg="Loading containers: done." 
Mar 6 03:01:37.739711 dockerd[1852]: time="2026-03-06T03:01:37.739636939Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 03:01:37.739935 dockerd[1852]: time="2026-03-06T03:01:37.739745622Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 6 03:01:37.739935 dockerd[1852]: time="2026-03-06T03:01:37.739848520Z" level=info msg="Initializing buildkit" Mar 6 03:01:37.772207 dockerd[1852]: time="2026-03-06T03:01:37.772158468Z" level=info msg="Completed buildkit initialization" Mar 6 03:01:37.783561 dockerd[1852]: time="2026-03-06T03:01:37.783511736Z" level=info msg="Daemon has completed initialization" Mar 6 03:01:37.785173 dockerd[1852]: time="2026-03-06T03:01:37.783700997Z" level=info msg="API listen on /run/docker.sock" Mar 6 03:01:37.783803 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 03:01:38.566712 containerd[1544]: time="2026-03-06T03:01:38.566659756Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 6 03:01:39.051152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714119889.mount: Deactivated successfully. 
Mar 6 03:01:40.762140 containerd[1544]: time="2026-03-06T03:01:40.762046201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:40.763789 containerd[1544]: time="2026-03-06T03:01:40.763476429Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30117617" Mar 6 03:01:40.764935 containerd[1544]: time="2026-03-06T03:01:40.764889507Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:40.768548 containerd[1544]: time="2026-03-06T03:01:40.768495482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:40.769803 containerd[1544]: time="2026-03-06T03:01:40.769758150Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.203045473s" Mar 6 03:01:40.769912 containerd[1544]: time="2026-03-06T03:01:40.769810513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 6 03:01:40.770517 containerd[1544]: time="2026-03-06T03:01:40.770457781Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 6 03:01:42.335678 containerd[1544]: time="2026-03-06T03:01:42.335600886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:42.337178 containerd[1544]: time="2026-03-06T03:01:42.337127287Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26022056" Mar 6 03:01:42.338516 containerd[1544]: time="2026-03-06T03:01:42.338444525Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:42.342552 containerd[1544]: time="2026-03-06T03:01:42.342507729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:42.345285 containerd[1544]: time="2026-03-06T03:01:42.345241808Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.574739891s" Mar 6 03:01:42.345482 containerd[1544]: time="2026-03-06T03:01:42.345288773Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 6 03:01:42.346299 containerd[1544]: time="2026-03-06T03:01:42.345898353Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 6 03:01:43.594907 containerd[1544]: time="2026-03-06T03:01:43.594846584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:43.596263 containerd[1544]: time="2026-03-06T03:01:43.596208026Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162974" Mar 6 03:01:43.597986 containerd[1544]: time="2026-03-06T03:01:43.597918542Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:43.601215 containerd[1544]: time="2026-03-06T03:01:43.601144300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:43.602580 containerd[1544]: time="2026-03-06T03:01:43.602418930Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.256486079s" Mar 6 03:01:43.602580 containerd[1544]: time="2026-03-06T03:01:43.602463184Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 6 03:01:43.603554 containerd[1544]: time="2026-03-06T03:01:43.603234084Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 6 03:01:44.668560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754475258.mount: Deactivated successfully. Mar 6 03:01:44.672619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 03:01:44.676321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:01:45.015492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 03:01:45.030527 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 03:01:45.117467 kubelet[2144]: E0306 03:01:45.117387 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 03:01:45.124874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 03:01:45.125459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 03:01:45.126012 systemd[1]: kubelet.service: Consumed 255ms CPU time, 109.2M memory peak. Mar 6 03:01:45.543133 containerd[1544]: time="2026-03-06T03:01:45.543054161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:45.544357 containerd[1544]: time="2026-03-06T03:01:45.544300479Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828974" Mar 6 03:01:45.545910 containerd[1544]: time="2026-03-06T03:01:45.545837727Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:45.548745 containerd[1544]: time="2026-03-06T03:01:45.548679298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:45.549998 containerd[1544]: time="2026-03-06T03:01:45.549498403Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id 
\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.946215552s" Mar 6 03:01:45.549998 containerd[1544]: time="2026-03-06T03:01:45.549545416Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 6 03:01:45.550408 containerd[1544]: time="2026-03-06T03:01:45.550362175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 6 03:01:45.933218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126862854.mount: Deactivated successfully. Mar 6 03:01:47.162066 containerd[1544]: time="2026-03-06T03:01:47.162007142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:47.163967 containerd[1544]: time="2026-03-06T03:01:47.163911284Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20943692" Mar 6 03:01:47.165128 containerd[1544]: time="2026-03-06T03:01:47.164403830Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:47.168404 containerd[1544]: time="2026-03-06T03:01:47.168362353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:47.170302 containerd[1544]: time="2026-03-06T03:01:47.170201750Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.619801396s" Mar 6 03:01:47.170415 containerd[1544]: time="2026-03-06T03:01:47.170308346Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 6 03:01:47.171330 containerd[1544]: time="2026-03-06T03:01:47.171274337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 6 03:01:47.501802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1893599476.mount: Deactivated successfully. Mar 6 03:01:47.510437 containerd[1544]: time="2026-03-06T03:01:47.510375503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:01:47.511388 containerd[1544]: time="2026-03-06T03:01:47.511346572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321348" Mar 6 03:01:47.512816 containerd[1544]: time="2026-03-06T03:01:47.512751462Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:01:47.515505 containerd[1544]: time="2026-03-06T03:01:47.515439229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 03:01:47.516963 containerd[1544]: time="2026-03-06T03:01:47.516359653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 345.046577ms" Mar 6 03:01:47.516963 containerd[1544]: time="2026-03-06T03:01:47.516402576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 6 03:01:47.517337 containerd[1544]: time="2026-03-06T03:01:47.517292691Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 6 03:01:47.900611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount23876845.mount: Deactivated successfully. Mar 6 03:01:49.202343 containerd[1544]: time="2026-03-06T03:01:49.202272915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:49.203845 containerd[1544]: time="2026-03-06T03:01:49.203802791Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23720121" Mar 6 03:01:49.205489 containerd[1544]: time="2026-03-06T03:01:49.205035748Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:49.208929 containerd[1544]: time="2026-03-06T03:01:49.208373381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:01:49.209746 containerd[1544]: time="2026-03-06T03:01:49.209704545Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.692375182s" Mar 6 03:01:49.209914 containerd[1544]: time="2026-03-06T03:01:49.209752070Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 6 03:01:54.258339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:01:54.258640 systemd[1]: kubelet.service: Consumed 255ms CPU time, 109.2M memory peak. Mar 6 03:01:54.261744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:01:54.300426 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-7.scope)... Mar 6 03:01:54.300462 systemd[1]: Reloading... Mar 6 03:01:54.461547 zram_generator::config[2345]: No configuration found. Mar 6 03:01:54.804672 systemd[1]: Reloading finished in 503 ms. Mar 6 03:01:54.872891 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 6 03:01:54.873021 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 6 03:01:54.873458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:01:54.873591 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.3M memory peak. Mar 6 03:01:54.876207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:01:55.823763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:01:55.834779 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 03:01:55.892454 kubelet[2396]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 03:01:55.892454 kubelet[2396]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 03:01:55.892454 kubelet[2396]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 03:01:55.893015 kubelet[2396]: I0306 03:01:55.892547 2396 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 03:01:56.566125 kubelet[2396]: I0306 03:01:56.565744 2396 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 03:01:56.566125 kubelet[2396]: I0306 03:01:56.565782 2396 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 03:01:56.566367 kubelet[2396]: I0306 03:01:56.566334 2396 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 03:01:56.621288 kubelet[2396]: E0306 03:01:56.621229 2396 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 03:01:56.622531 kubelet[2396]: I0306 03:01:56.622307 2396 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 03:01:56.632700 kubelet[2396]: I0306 03:01:56.632646 2396 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 03:01:56.636828 kubelet[2396]: I0306 03:01:56.636789 2396 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 03:01:56.638306 kubelet[2396]: I0306 03:01:56.638239 2396 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 03:01:56.638527 kubelet[2396]: I0306 03:01:56.638293 2396 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 03:01:56.638734 kubelet[2396]: I0306 03:01:56.638536 2396 topology_manager.go:138] "Creating topology manager 
with none policy" Mar 6 03:01:56.638734 kubelet[2396]: I0306 03:01:56.638555 2396 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 03:01:56.638734 kubelet[2396]: I0306 03:01:56.638730 2396 state_mem.go:36] "Initialized new in-memory state store" Mar 6 03:01:56.645135 kubelet[2396]: I0306 03:01:56.645100 2396 kubelet.go:480] "Attempting to sync node with API server" Mar 6 03:01:56.645250 kubelet[2396]: I0306 03:01:56.645149 2396 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 03:01:56.645250 kubelet[2396]: I0306 03:01:56.645190 2396 kubelet.go:386] "Adding apiserver pod source" Mar 6 03:01:56.648470 kubelet[2396]: I0306 03:01:56.648141 2396 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 03:01:56.655676 kubelet[2396]: E0306 03:01:56.655613 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011&limit=500&resourceVersion=0\": dial tcp 10.128.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 03:01:56.656886 kubelet[2396]: I0306 03:01:56.655786 2396 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 6 03:01:56.656886 kubelet[2396]: I0306 03:01:56.656758 2396 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 03:01:56.658622 kubelet[2396]: W0306 03:01:56.657800 2396 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 6 03:01:56.665674 kubelet[2396]: E0306 03:01:56.665636 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 03:01:56.678593 kubelet[2396]: I0306 03:01:56.678561 2396 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 03:01:56.678724 kubelet[2396]: I0306 03:01:56.678635 2396 server.go:1289] "Started kubelet" Mar 6 03:01:56.678877 kubelet[2396]: I0306 03:01:56.678835 2396 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 03:01:56.690107 kubelet[2396]: I0306 03:01:56.689651 2396 server.go:317] "Adding debug handlers to kubelet server" Mar 6 03:01:56.690107 kubelet[2396]: I0306 03:01:56.689930 2396 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 03:01:56.690539 kubelet[2396]: I0306 03:01:56.690510 2396 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 03:01:56.694023 kubelet[2396]: E0306 03:01:56.692184 2396 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011.189a2163fe79d357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,UID:ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,},FirstTimestamp:2026-03-06 03:01:56.678587223 +0000 UTC m=+0.837554664,LastTimestamp:2026-03-06 03:01:56.678587223 +0000 UTC m=+0.837554664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,}" Mar 6 03:01:56.700476 kubelet[2396]: I0306 03:01:56.700438 2396 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 03:01:56.702315 kubelet[2396]: E0306 03:01:56.701741 2396 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 03:01:56.702315 kubelet[2396]: I0306 03:01:56.701880 2396 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 03:01:56.706519 kubelet[2396]: E0306 03:01:56.706442 2396 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" Mar 6 03:01:56.706618 kubelet[2396]: I0306 03:01:56.706535 2396 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 03:01:56.706847 kubelet[2396]: I0306 03:01:56.706813 2396 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 03:01:56.706937 kubelet[2396]: I0306 03:01:56.706887 2396 reconciler.go:26] "Reconciler: start to sync state" Mar 6 03:01:56.707572 kubelet[2396]: E0306 03:01:56.707526 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 
03:01:56.708044 kubelet[2396]: I0306 03:01:56.708013 2396 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 03:01:56.711109 kubelet[2396]: I0306 03:01:56.709991 2396 factory.go:223] Registration of the containerd container factory successfully Mar 6 03:01:56.711109 kubelet[2396]: I0306 03:01:56.710013 2396 factory.go:223] Registration of the systemd container factory successfully Mar 6 03:01:56.711892 kubelet[2396]: I0306 03:01:56.711855 2396 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 6 03:01:56.716135 kubelet[2396]: E0306 03:01:56.716067 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011?timeout=10s\": dial tcp 10.128.0.102:6443: connect: connection refused" interval="200ms" Mar 6 03:01:56.731129 kubelet[2396]: I0306 03:01:56.731063 2396 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 03:01:56.731129 kubelet[2396]: I0306 03:01:56.731100 2396 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 03:01:56.731291 kubelet[2396]: I0306 03:01:56.731149 2396 state_mem.go:36] "Initialized new in-memory state store" Mar 6 03:01:56.735202 kubelet[2396]: I0306 03:01:56.735177 2396 policy_none.go:49] "None policy: Start" Mar 6 03:01:56.735202 kubelet[2396]: I0306 03:01:56.735203 2396 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 03:01:56.735202 kubelet[2396]: I0306 03:01:56.735219 2396 state_mem.go:35] "Initializing new in-memory state store" Mar 6 03:01:56.747823 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 6 03:01:56.752530 kubelet[2396]: I0306 03:01:56.752478 2396 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 6 03:01:56.752530 kubelet[2396]: I0306 03:01:56.752510 2396 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 03:01:56.752530 kubelet[2396]: I0306 03:01:56.752535 2396 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 03:01:56.752726 kubelet[2396]: I0306 03:01:56.752546 2396 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 03:01:56.752726 kubelet[2396]: E0306 03:01:56.752604 2396 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 03:01:56.753635 kubelet[2396]: E0306 03:01:56.753597 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 03:01:56.762119 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 6 03:01:56.767009 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 6 03:01:56.783265 kubelet[2396]: E0306 03:01:56.783032 2396 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 03:01:56.783957 kubelet[2396]: I0306 03:01:56.783938 2396 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 03:01:56.783957 kubelet[2396]: I0306 03:01:56.784123 2396 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 03:01:56.783957 kubelet[2396]: I0306 03:01:56.784610 2396 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 03:01:56.787477 kubelet[2396]: E0306 03:01:56.787453 2396 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 03:01:56.787802 kubelet[2396]: E0306 03:01:56.787774 2396 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" Mar 6 03:01:56.878718 systemd[1]: Created slice kubepods-burstable-podea1051d5ce54202aab56f89a8fa86e3b.slice - libcontainer container kubepods-burstable-podea1051d5ce54202aab56f89a8fa86e3b.slice. 
Mar 6 03:01:56.897996 kubelet[2396]: I0306 03:01:56.897297 2396 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.897996 kubelet[2396]: E0306 03:01:56.897669 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.102:6443/api/v1/nodes\": dial tcp 10.128.0.102:6443: connect: connection refused" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.899386 kubelet[2396]: E0306 03:01:56.899356 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.906676 systemd[1]: Created slice kubepods-burstable-pode8a9000c8ae7d8e2ec5fda20f9ec145e.slice - libcontainer container kubepods-burstable-pode8a9000c8ae7d8e2ec5fda20f9ec145e.slice. Mar 6 03:01:56.909288 kubelet[2396]: I0306 03:01:56.909157 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.909522 kubelet[2396]: I0306 03:01:56.909439 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea1051d5ce54202aab56f89a8fa86e3b-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"ea1051d5ce54202aab56f89a8fa86e3b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.909735 kubelet[2396]: I0306 
03:01:56.909622 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea1051d5ce54202aab56f89a8fa86e3b-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"ea1051d5ce54202aab56f89a8fa86e3b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.910048 kubelet[2396]: I0306 03:01:56.909661 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea1051d5ce54202aab56f89a8fa86e3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"ea1051d5ce54202aab56f89a8fa86e3b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.910048 kubelet[2396]: I0306 03:01:56.909820 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.910735 kubelet[2396]: E0306 03:01:56.910672 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.911045 kubelet[2396]: I0306 03:01:56.911020 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.911485 kubelet[2396]: I0306 03:01:56.911331 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.913674 systemd[1]: Created slice kubepods-burstable-pod3adde9f65470df86612c19d5fd585571.slice - libcontainer container kubepods-burstable-pod3adde9f65470df86612c19d5fd585571.slice. Mar 6 03:01:56.916495 kubelet[2396]: E0306 03:01:56.916466 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:56.918811 kubelet[2396]: E0306 03:01:56.918750 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011?timeout=10s\": dial tcp 10.128.0.102:6443: connect: connection refused" interval="400ms" Mar 6 03:01:57.012492 kubelet[2396]: I0306 03:01:57.012409 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: 
\"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:57.012492 kubelet[2396]: I0306 03:01:57.012471 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3adde9f65470df86612c19d5fd585571-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"3adde9f65470df86612c19d5fd585571\") " pod="kube-system/kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:57.104102 kubelet[2396]: I0306 03:01:57.103863 2396 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:57.104509 kubelet[2396]: E0306 03:01:57.104454 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.102:6443/api/v1/nodes\": dial tcp 10.128.0.102:6443: connect: connection refused" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:57.197455 kubelet[2396]: E0306 03:01:57.197225 2396 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011.189a2163fe79d357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,UID:ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,},FirstTimestamp:2026-03-06 03:01:56.678587223 +0000 UTC m=+0.837554664,LastTimestamp:2026-03-06 03:01:56.678587223 +0000 UTC 
m=+0.837554664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,}" Mar 6 03:01:57.201548 containerd[1544]: time="2026-03-06T03:01:57.201481494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,Uid:ea1051d5ce54202aab56f89a8fa86e3b,Namespace:kube-system,Attempt:0,}" Mar 6 03:01:57.213305 containerd[1544]: time="2026-03-06T03:01:57.213237260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,Uid:e8a9000c8ae7d8e2ec5fda20f9ec145e,Namespace:kube-system,Attempt:0,}" Mar 6 03:01:57.228319 containerd[1544]: time="2026-03-06T03:01:57.228125857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,Uid:3adde9f65470df86612c19d5fd585571,Namespace:kube-system,Attempt:0,}" Mar 6 03:01:57.236127 containerd[1544]: time="2026-03-06T03:01:57.236058409Z" level=info msg="connecting to shim 49255c79f2e7daa188a5169a5a6b2cda9c84010024c5c02ec29fd7485b685f12" address="unix:///run/containerd/s/7dfad1b3e2cedda4625a20199526880338b2f8e8b9aab13bdacee0d33d2f21d2" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:01:57.294135 containerd[1544]: time="2026-03-06T03:01:57.292808571Z" level=info msg="connecting to shim 3b7dc3af6b8810892906e1a5895a76d82c85ba1f3d33ce0a2d5aeaeac9d7ee63" address="unix:///run/containerd/s/f0d00c726073171d50d22df398048cf5ee6ab9cb84f22cc4ea78708cd76d28fd" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:01:57.302687 containerd[1544]: time="2026-03-06T03:01:57.300452974Z" level=info msg="connecting to shim c63b12f631e3bfbc07a18c10cc6c61daf4be3938d3a0ab698d680d7a8a0891fc" address="unix:///run/containerd/s/6bf28cc96998ab86d3a943a2f35e2264e70f46bf12ed0140e2cf8dd003b9a8bd" namespace=k8s.io 
protocol=ttrpc version=3 Mar 6 03:01:57.302328 systemd[1]: Started cri-containerd-49255c79f2e7daa188a5169a5a6b2cda9c84010024c5c02ec29fd7485b685f12.scope - libcontainer container 49255c79f2e7daa188a5169a5a6b2cda9c84010024c5c02ec29fd7485b685f12. Mar 6 03:01:57.321653 kubelet[2396]: E0306 03:01:57.321599 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011?timeout=10s\": dial tcp 10.128.0.102:6443: connect: connection refused" interval="800ms" Mar 6 03:01:57.370384 systemd[1]: Started cri-containerd-3b7dc3af6b8810892906e1a5895a76d82c85ba1f3d33ce0a2d5aeaeac9d7ee63.scope - libcontainer container 3b7dc3af6b8810892906e1a5895a76d82c85ba1f3d33ce0a2d5aeaeac9d7ee63. Mar 6 03:01:57.385302 systemd[1]: Started cri-containerd-c63b12f631e3bfbc07a18c10cc6c61daf4be3938d3a0ab698d680d7a8a0891fc.scope - libcontainer container c63b12f631e3bfbc07a18c10cc6c61daf4be3938d3a0ab698d680d7a8a0891fc. 
Mar 6 03:01:57.460336 containerd[1544]: time="2026-03-06T03:01:57.459759652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,Uid:ea1051d5ce54202aab56f89a8fa86e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"49255c79f2e7daa188a5169a5a6b2cda9c84010024c5c02ec29fd7485b685f12\"" Mar 6 03:01:57.477189 kubelet[2396]: E0306 03:01:57.477016 2396 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec5" Mar 6 03:01:57.487245 containerd[1544]: time="2026-03-06T03:01:57.486937398Z" level=info msg="CreateContainer within sandbox \"49255c79f2e7daa188a5169a5a6b2cda9c84010024c5c02ec29fd7485b685f12\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 03:01:57.499585 containerd[1544]: time="2026-03-06T03:01:57.499513595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,Uid:3adde9f65470df86612c19d5fd585571,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b7dc3af6b8810892906e1a5895a76d82c85ba1f3d33ce0a2d5aeaeac9d7ee63\"" Mar 6 03:01:57.502548 kubelet[2396]: E0306 03:01:57.502512 2396 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec5" Mar 6 03:01:57.505273 containerd[1544]: time="2026-03-06T03:01:57.505167961Z" level=info msg="Container 34304a1529e8090b8fded0be382c4ab224ff439979d219cf87855afd8e3ff81b: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:01:57.508865 containerd[1544]: time="2026-03-06T03:01:57.508509488Z" level=info msg="CreateContainer within sandbox 
\"3b7dc3af6b8810892906e1a5895a76d82c85ba1f3d33ce0a2d5aeaeac9d7ee63\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 03:01:57.510268 kubelet[2396]: I0306 03:01:57.510238 2396 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:57.510675 kubelet[2396]: E0306 03:01:57.510625 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.102:6443/api/v1/nodes\": dial tcp 10.128.0.102:6443: connect: connection refused" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:57.519845 containerd[1544]: time="2026-03-06T03:01:57.519716931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011,Uid:e8a9000c8ae7d8e2ec5fda20f9ec145e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c63b12f631e3bfbc07a18c10cc6c61daf4be3938d3a0ab698d680d7a8a0891fc\"" Mar 6 03:01:57.522063 kubelet[2396]: E0306 03:01:57.522001 2396 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff68" Mar 6 03:01:57.525108 containerd[1544]: time="2026-03-06T03:01:57.524985042Z" level=info msg="CreateContainer within sandbox \"49255c79f2e7daa188a5169a5a6b2cda9c84010024c5c02ec29fd7485b685f12\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"34304a1529e8090b8fded0be382c4ab224ff439979d219cf87855afd8e3ff81b\"" Mar 6 03:01:57.526381 containerd[1544]: time="2026-03-06T03:01:57.526331026Z" level=info msg="StartContainer for \"34304a1529e8090b8fded0be382c4ab224ff439979d219cf87855afd8e3ff81b\"" Mar 6 03:01:57.527989 containerd[1544]: time="2026-03-06T03:01:57.527946756Z" level=info msg="connecting to shim 
34304a1529e8090b8fded0be382c4ab224ff439979d219cf87855afd8e3ff81b" address="unix:///run/containerd/s/7dfad1b3e2cedda4625a20199526880338b2f8e8b9aab13bdacee0d33d2f21d2" protocol=ttrpc version=3 Mar 6 03:01:57.528601 containerd[1544]: time="2026-03-06T03:01:57.528349406Z" level=info msg="CreateContainer within sandbox \"c63b12f631e3bfbc07a18c10cc6c61daf4be3938d3a0ab698d680d7a8a0891fc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 03:01:57.529399 containerd[1544]: time="2026-03-06T03:01:57.529370097Z" level=info msg="Container 11b53b198abe4c8ea20c0efe498e4850dc2b110494ec7137a82ee8626466f8da: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:01:57.540773 containerd[1544]: time="2026-03-06T03:01:57.540738881Z" level=info msg="CreateContainer within sandbox \"3b7dc3af6b8810892906e1a5895a76d82c85ba1f3d33ce0a2d5aeaeac9d7ee63\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"11b53b198abe4c8ea20c0efe498e4850dc2b110494ec7137a82ee8626466f8da\"" Mar 6 03:01:57.541799 containerd[1544]: time="2026-03-06T03:01:57.541752963Z" level=info msg="StartContainer for \"11b53b198abe4c8ea20c0efe498e4850dc2b110494ec7137a82ee8626466f8da\"" Mar 6 03:01:57.544028 containerd[1544]: time="2026-03-06T03:01:57.543977081Z" level=info msg="Container 94bd5ef483019f402828fbf7b88c7d1c7efc24dce20519859d421cfe74e57480: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:01:57.545345 containerd[1544]: time="2026-03-06T03:01:57.545305565Z" level=info msg="connecting to shim 11b53b198abe4c8ea20c0efe498e4850dc2b110494ec7137a82ee8626466f8da" address="unix:///run/containerd/s/f0d00c726073171d50d22df398048cf5ee6ab9cb84f22cc4ea78708cd76d28fd" protocol=ttrpc version=3 Mar 6 03:01:57.558429 containerd[1544]: time="2026-03-06T03:01:57.558385858Z" level=info msg="CreateContainer within sandbox \"c63b12f631e3bfbc07a18c10cc6c61daf4be3938d3a0ab698d680d7a8a0891fc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"94bd5ef483019f402828fbf7b88c7d1c7efc24dce20519859d421cfe74e57480\"" Mar 6 03:01:57.560235 containerd[1544]: time="2026-03-06T03:01:57.560194575Z" level=info msg="StartContainer for \"94bd5ef483019f402828fbf7b88c7d1c7efc24dce20519859d421cfe74e57480\"" Mar 6 03:01:57.562307 systemd[1]: Started cri-containerd-34304a1529e8090b8fded0be382c4ab224ff439979d219cf87855afd8e3ff81b.scope - libcontainer container 34304a1529e8090b8fded0be382c4ab224ff439979d219cf87855afd8e3ff81b. Mar 6 03:01:57.565205 containerd[1544]: time="2026-03-06T03:01:57.565169234Z" level=info msg="connecting to shim 94bd5ef483019f402828fbf7b88c7d1c7efc24dce20519859d421cfe74e57480" address="unix:///run/containerd/s/6bf28cc96998ab86d3a943a2f35e2264e70f46bf12ed0140e2cf8dd003b9a8bd" protocol=ttrpc version=3 Mar 6 03:01:57.601498 systemd[1]: Started cri-containerd-11b53b198abe4c8ea20c0efe498e4850dc2b110494ec7137a82ee8626466f8da.scope - libcontainer container 11b53b198abe4c8ea20c0efe498e4850dc2b110494ec7137a82ee8626466f8da. Mar 6 03:01:57.613489 systemd[1]: Started cri-containerd-94bd5ef483019f402828fbf7b88c7d1c7efc24dce20519859d421cfe74e57480.scope - libcontainer container 94bd5ef483019f402828fbf7b88c7d1c7efc24dce20519859d421cfe74e57480. 
Mar 6 03:01:57.724165 kubelet[2396]: E0306 03:01:57.723813 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 03:01:57.732967 containerd[1544]: time="2026-03-06T03:01:57.732909589Z" level=info msg="StartContainer for \"34304a1529e8090b8fded0be382c4ab224ff439979d219cf87855afd8e3ff81b\" returns successfully" Mar 6 03:01:57.742406 containerd[1544]: time="2026-03-06T03:01:57.742317866Z" level=info msg="StartContainer for \"11b53b198abe4c8ea20c0efe498e4850dc2b110494ec7137a82ee8626466f8da\" returns successfully" Mar 6 03:01:57.768286 kubelet[2396]: E0306 03:01:57.768048 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:57.770827 kubelet[2396]: E0306 03:01:57.770792 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 03:01:57.775099 kubelet[2396]: E0306 03:01:57.774860 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:57.804519 containerd[1544]: time="2026-03-06T03:01:57.804473010Z" level=info msg="StartContainer for \"94bd5ef483019f402828fbf7b88c7d1c7efc24dce20519859d421cfe74e57480\" 
returns successfully" Mar 6 03:01:58.317165 kubelet[2396]: I0306 03:01:58.316793 2396 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:58.788995 kubelet[2396]: E0306 03:01:58.788859 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:58.789614 kubelet[2396]: E0306 03:01:58.789578 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:58.790102 kubelet[2396]: E0306 03:01:58.790048 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:01:59.792322 kubelet[2396]: E0306 03:01:59.792278 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:00.016793 kubelet[2396]: E0306 03:02:00.016749 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:01.627777 kubelet[2396]: E0306 03:02:01.627727 2396 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" 
node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:01.667596 kubelet[2396]: I0306 03:02:01.667501 2396 apiserver.go:52] "Watching apiserver" Mar 6 03:02:01.703009 kubelet[2396]: I0306 03:02:01.702958 2396 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:01.703009 kubelet[2396]: E0306 03:02:01.703016 2396 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\": node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" not found" Mar 6 03:02:01.707199 kubelet[2396]: I0306 03:02:01.707138 2396 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 03:02:01.716172 kubelet[2396]: I0306 03:02:01.716115 2396 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:01.776442 kubelet[2396]: E0306 03:02:01.776392 2396 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:01.776442 kubelet[2396]: I0306 03:02:01.776441 2396 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:01.779041 kubelet[2396]: E0306 03:02:01.778973 2396 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:01.779041 kubelet[2396]: I0306 
03:02:01.779013 2396 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:01.782752 kubelet[2396]: E0306 03:02:01.782715 2396 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:02.013167 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 6 03:02:03.832005 systemd[1]: Reload requested from client PID 2683 ('systemctl') (unit session-7.scope)... Mar 6 03:02:03.832030 systemd[1]: Reloading... Mar 6 03:02:03.964369 zram_generator::config[2723]: No configuration found. Mar 6 03:02:04.289915 systemd[1]: Reloading finished in 457 ms. Mar 6 03:02:04.330833 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:04.346233 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 03:02:04.346582 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:02:04.346687 systemd[1]: kubelet.service: Consumed 1.346s CPU time, 133.4M memory peak. Mar 6 03:02:04.350286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 03:02:04.692295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 03:02:04.709696 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 03:02:04.789526 kubelet[2775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 03:02:04.789526 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 03:02:04.789526 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 03:02:04.789526 kubelet[2775]: I0306 03:02:04.789335 2775 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 03:02:04.806717 kubelet[2775]: I0306 03:02:04.806679 2775 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 03:02:04.806943 kubelet[2775]: I0306 03:02:04.806921 2775 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 03:02:04.807565 kubelet[2775]: I0306 03:02:04.807520 2775 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 03:02:04.809977 kubelet[2775]: I0306 03:02:04.809955 2775 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 03:02:04.814882 kubelet[2775]: I0306 03:02:04.814856 2775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 03:02:04.832038 kubelet[2775]: I0306 03:02:04.832004 2775 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 03:02:04.838009 kubelet[2775]: I0306 03:02:04.837960 2775 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 03:02:04.839041 kubelet[2775]: I0306 03:02:04.838969 2775 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 03:02:04.839583 kubelet[2775]: I0306 03:02:04.839278 2775 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 03:02:04.840392 kubelet[2775]: I0306 03:02:04.840256 2775 topology_manager.go:138] "Creating topology manager 
with none policy" Mar 6 03:02:04.840751 kubelet[2775]: I0306 03:02:04.840616 2775 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 03:02:04.840955 kubelet[2775]: I0306 03:02:04.840867 2775 state_mem.go:36] "Initialized new in-memory state store" Mar 6 03:02:04.841686 kubelet[2775]: I0306 03:02:04.841663 2775 kubelet.go:480] "Attempting to sync node with API server" Mar 6 03:02:04.842904 kubelet[2775]: I0306 03:02:04.842845 2775 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 03:02:04.843195 kubelet[2775]: I0306 03:02:04.843149 2775 kubelet.go:386] "Adding apiserver pod source" Mar 6 03:02:04.843195 kubelet[2775]: I0306 03:02:04.843266 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 03:02:04.855795 kubelet[2775]: I0306 03:02:04.855734 2775 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 6 03:02:04.860611 kubelet[2775]: I0306 03:02:04.860228 2775 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 03:02:04.911115 kubelet[2775]: I0306 03:02:04.910774 2775 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 03:02:04.911759 kubelet[2775]: I0306 03:02:04.911675 2775 server.go:1289] "Started kubelet" Mar 6 03:02:04.915156 kubelet[2775]: I0306 03:02:04.914233 2775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 03:02:04.916196 kubelet[2775]: I0306 03:02:04.916158 2775 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 03:02:04.916313 kubelet[2775]: I0306 03:02:04.916241 2775 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 03:02:04.916937 kubelet[2775]: I0306 03:02:04.916915 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 
6 03:02:04.918053 kubelet[2775]: I0306 03:02:04.917703 2775 server.go:317] "Adding debug handlers to kubelet server" Mar 6 03:02:04.927943 kubelet[2775]: I0306 03:02:04.927918 2775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 03:02:04.935818 kubelet[2775]: I0306 03:02:04.934594 2775 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 03:02:04.937988 kubelet[2775]: I0306 03:02:04.937949 2775 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 03:02:04.938451 kubelet[2775]: I0306 03:02:04.938162 2775 reconciler.go:26] "Reconciler: start to sync state" Mar 6 03:02:04.943606 kubelet[2775]: I0306 03:02:04.943510 2775 factory.go:223] Registration of the systemd container factory successfully Mar 6 03:02:04.943691 kubelet[2775]: I0306 03:02:04.943620 2775 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 03:02:04.946887 kubelet[2775]: I0306 03:02:04.946503 2775 factory.go:223] Registration of the containerd container factory successfully Mar 6 03:02:04.950034 kubelet[2775]: E0306 03:02:04.949972 2775 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 03:02:04.950319 kubelet[2775]: I0306 03:02:04.950284 2775 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 03:02:04.965915 kubelet[2775]: I0306 03:02:04.965864 2775 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 6 03:02:04.966524 kubelet[2775]: I0306 03:02:04.966507 2775 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 03:02:04.966902 kubelet[2775]: I0306 03:02:04.966615 2775 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 03:02:04.966902 kubelet[2775]: I0306 03:02:04.966628 2775 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 03:02:04.967372 kubelet[2775]: E0306 03:02:04.967101 2775 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 03:02:05.038753 kubelet[2775]: I0306 03:02:05.038681 2775 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 03:02:05.038753 kubelet[2775]: I0306 03:02:05.038708 2775 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 03:02:05.038978 kubelet[2775]: I0306 03:02:05.038773 2775 state_mem.go:36] "Initialized new in-memory state store" Mar 6 03:02:05.039174 kubelet[2775]: I0306 03:02:05.039023 2775 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 03:02:05.039174 kubelet[2775]: I0306 03:02:05.039104 2775 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 03:02:05.039174 kubelet[2775]: I0306 03:02:05.039137 2775 policy_none.go:49] "None policy: Start" Mar 6 03:02:05.039174 kubelet[2775]: I0306 03:02:05.039173 2775 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 03:02:05.041862 kubelet[2775]: I0306 03:02:05.039192 2775 state_mem.go:35] "Initializing new in-memory state store" Mar 6 03:02:05.041862 kubelet[2775]: I0306 03:02:05.039448 2775 state_mem.go:75] "Updated machine memory state" Mar 6 03:02:05.058718 kubelet[2775]: E0306 03:02:05.058661 2775 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 03:02:05.059824 kubelet[2775]: I0306 03:02:05.059352 
2775 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 03:02:05.059824 kubelet[2775]: I0306 03:02:05.059377 2775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 03:02:05.061526 kubelet[2775]: I0306 03:02:05.060450 2775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 03:02:05.066218 kubelet[2775]: E0306 03:02:05.066167 2775 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 03:02:05.074391 kubelet[2775]: I0306 03:02:05.074359 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.075948 kubelet[2775]: I0306 03:02:05.074851 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.081391 kubelet[2775]: I0306 03:02:05.081020 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.118518 kubelet[2775]: I0306 03:02:05.117312 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 6 03:02:05.118518 kubelet[2775]: I0306 03:02:05.118207 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 6 03:02:05.118518 kubelet[2775]: I0306 03:02:05.118260 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 6 03:02:05.139416 
kubelet[2775]: I0306 03:02:05.139194 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.139416 kubelet[2775]: I0306 03:02:05.139249 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.139416 kubelet[2775]: I0306 03:02:05.139281 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.139416 kubelet[2775]: I0306 03:02:05.139312 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.139751 kubelet[2775]: 
I0306 03:02:05.139343 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3adde9f65470df86612c19d5fd585571-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"3adde9f65470df86612c19d5fd585571\") " pod="kube-system/kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.139751 kubelet[2775]: I0306 03:02:05.139374 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea1051d5ce54202aab56f89a8fa86e3b-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"ea1051d5ce54202aab56f89a8fa86e3b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.139751 kubelet[2775]: I0306 03:02:05.139406 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8a9000c8ae7d8e2ec5fda20f9ec145e-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"e8a9000c8ae7d8e2ec5fda20f9ec145e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.139751 kubelet[2775]: I0306 03:02:05.139434 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea1051d5ce54202aab56f89a8fa86e3b-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"ea1051d5ce54202aab56f89a8fa86e3b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.139939 kubelet[2775]: I0306 03:02:05.139462 2775 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea1051d5ce54202aab56f89a8fa86e3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" (UID: \"ea1051d5ce54202aab56f89a8fa86e3b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.184835 kubelet[2775]: I0306 03:02:05.184798 2775 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.197628 kubelet[2775]: I0306 03:02:05.197497 2775 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.197784 kubelet[2775]: I0306 03:02:05.197630 2775 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:05.846128 kubelet[2775]: I0306 03:02:05.845980 2775 apiserver.go:52] "Watching apiserver" Mar 6 03:02:05.938258 kubelet[2775]: I0306 03:02:05.938192 2775 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 03:02:06.009565 kubelet[2775]: I0306 03:02:06.009528 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:06.010093 kubelet[2775]: I0306 03:02:06.010042 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:06.012213 kubelet[2775]: I0306 03:02:06.012185 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:06.022537 kubelet[2775]: I0306 03:02:06.022505 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 6 03:02:06.022721 kubelet[2775]: E0306 03:02:06.022567 2775 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:06.025496 kubelet[2775]: I0306 03:02:06.025467 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 6 03:02:06.025640 kubelet[2775]: E0306 03:02:06.025524 2775 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:06.026009 kubelet[2775]: I0306 03:02:06.025908 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 6 03:02:06.026149 kubelet[2775]: E0306 03:02:06.026106 2775 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:06.063810 kubelet[2775]: I0306 03:02:06.063698 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" podStartSLOduration=1.063675898 podStartE2EDuration="1.063675898s" podCreationTimestamp="2026-03-06 03:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-06 03:02:06.050164202 +0000 UTC m=+1.332353505" watchObservedRunningTime="2026-03-06 03:02:06.063675898 +0000 UTC m=+1.345865195" Mar 6 03:02:06.065339 kubelet[2775]: I0306 03:02:06.065189 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" podStartSLOduration=1.065173024 podStartE2EDuration="1.065173024s" podCreationTimestamp="2026-03-06 03:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:06.062548409 +0000 UTC m=+1.344737713" watchObservedRunningTime="2026-03-06 03:02:06.065173024 +0000 UTC m=+1.347362326" Mar 6 03:02:09.494743 kubelet[2775]: I0306 03:02:09.494665 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" podStartSLOduration=4.494641642 podStartE2EDuration="4.494641642s" podCreationTimestamp="2026-03-06 03:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:06.079657596 +0000 UTC m=+1.361846900" watchObservedRunningTime="2026-03-06 03:02:09.494641642 +0000 UTC m=+4.776830945" Mar 6 03:02:09.506060 kubelet[2775]: I0306 03:02:09.506012 2775 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 03:02:09.508408 containerd[1544]: time="2026-03-06T03:02:09.508366869Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 6 03:02:09.509787 kubelet[2775]: I0306 03:02:09.509338 2775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 03:02:10.074941 kubelet[2775]: I0306 03:02:10.074900 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4cc0a9e1-76b0-48dd-96bd-eb349bcb0204-kube-proxy\") pod \"kube-proxy-pqhpv\" (UID: \"4cc0a9e1-76b0-48dd-96bd-eb349bcb0204\") " pod="kube-system/kube-proxy-pqhpv" Mar 6 03:02:10.075451 systemd[1]: Created slice kubepods-besteffort-pod4cc0a9e1_76b0_48dd_96bd_eb349bcb0204.slice - libcontainer container kubepods-besteffort-pod4cc0a9e1_76b0_48dd_96bd_eb349bcb0204.slice. Mar 6 03:02:10.077987 kubelet[2775]: I0306 03:02:10.077818 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cc0a9e1-76b0-48dd-96bd-eb349bcb0204-xtables-lock\") pod \"kube-proxy-pqhpv\" (UID: \"4cc0a9e1-76b0-48dd-96bd-eb349bcb0204\") " pod="kube-system/kube-proxy-pqhpv" Mar 6 03:02:10.077987 kubelet[2775]: I0306 03:02:10.077870 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cc0a9e1-76b0-48dd-96bd-eb349bcb0204-lib-modules\") pod \"kube-proxy-pqhpv\" (UID: \"4cc0a9e1-76b0-48dd-96bd-eb349bcb0204\") " pod="kube-system/kube-proxy-pqhpv" Mar 6 03:02:10.077987 kubelet[2775]: I0306 03:02:10.077905 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5tw\" (UniqueName: \"kubernetes.io/projected/4cc0a9e1-76b0-48dd-96bd-eb349bcb0204-kube-api-access-bv5tw\") pod \"kube-proxy-pqhpv\" (UID: \"4cc0a9e1-76b0-48dd-96bd-eb349bcb0204\") " pod="kube-system/kube-proxy-pqhpv" Mar 6 03:02:10.389833 containerd[1544]: time="2026-03-06T03:02:10.389774476Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-pqhpv,Uid:4cc0a9e1-76b0-48dd-96bd-eb349bcb0204,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:10.415364 containerd[1544]: time="2026-03-06T03:02:10.415238774Z" level=info msg="connecting to shim 95d7aa799e25b431fdc17784d89ff2ec50a7180452a6ba3bdc97785531ccca39" address="unix:///run/containerd/s/e2e91230ebad732be877ccd53b06f53d7bca3938d42173b78b1f5081bbbdb9e0" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:10.458335 systemd[1]: Started cri-containerd-95d7aa799e25b431fdc17784d89ff2ec50a7180452a6ba3bdc97785531ccca39.scope - libcontainer container 95d7aa799e25b431fdc17784d89ff2ec50a7180452a6ba3bdc97785531ccca39. Mar 6 03:02:10.504151 containerd[1544]: time="2026-03-06T03:02:10.504021122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqhpv,Uid:4cc0a9e1-76b0-48dd-96bd-eb349bcb0204,Namespace:kube-system,Attempt:0,} returns sandbox id \"95d7aa799e25b431fdc17784d89ff2ec50a7180452a6ba3bdc97785531ccca39\"" Mar 6 03:02:10.511785 containerd[1544]: time="2026-03-06T03:02:10.511704474Z" level=info msg="CreateContainer within sandbox \"95d7aa799e25b431fdc17784d89ff2ec50a7180452a6ba3bdc97785531ccca39\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 03:02:10.531812 containerd[1544]: time="2026-03-06T03:02:10.531141621Z" level=info msg="Container d555fc98845961eec25f98dded48f4c27da49efb579f5ae936f78b2bc50904a7: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:10.542372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383248060.mount: Deactivated successfully. 
Mar 6 03:02:10.547744 containerd[1544]: time="2026-03-06T03:02:10.547694608Z" level=info msg="CreateContainer within sandbox \"95d7aa799e25b431fdc17784d89ff2ec50a7180452a6ba3bdc97785531ccca39\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d555fc98845961eec25f98dded48f4c27da49efb579f5ae936f78b2bc50904a7\"" Mar 6 03:02:10.548589 containerd[1544]: time="2026-03-06T03:02:10.548534214Z" level=info msg="StartContainer for \"d555fc98845961eec25f98dded48f4c27da49efb579f5ae936f78b2bc50904a7\"" Mar 6 03:02:10.552107 containerd[1544]: time="2026-03-06T03:02:10.551868434Z" level=info msg="connecting to shim d555fc98845961eec25f98dded48f4c27da49efb579f5ae936f78b2bc50904a7" address="unix:///run/containerd/s/e2e91230ebad732be877ccd53b06f53d7bca3938d42173b78b1f5081bbbdb9e0" protocol=ttrpc version=3 Mar 6 03:02:10.585734 systemd[1]: Started cri-containerd-d555fc98845961eec25f98dded48f4c27da49efb579f5ae936f78b2bc50904a7.scope - libcontainer container d555fc98845961eec25f98dded48f4c27da49efb579f5ae936f78b2bc50904a7. Mar 6 03:02:10.668936 systemd[1]: Created slice kubepods-besteffort-podfa7be649_d9bc_45aa_9a60_57b49ba1ac81.slice - libcontainer container kubepods-besteffort-podfa7be649_d9bc_45aa_9a60_57b49ba1ac81.slice. 
Mar 6 03:02:10.681614 kubelet[2775]: I0306 03:02:10.681525 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqxbz\" (UniqueName: \"kubernetes.io/projected/fa7be649-d9bc-45aa-9a60-57b49ba1ac81-kube-api-access-hqxbz\") pod \"tigera-operator-6bf85f8dd-dckct\" (UID: \"fa7be649-d9bc-45aa-9a60-57b49ba1ac81\") " pod="tigera-operator/tigera-operator-6bf85f8dd-dckct" Mar 6 03:02:10.681614 kubelet[2775]: I0306 03:02:10.681565 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa7be649-d9bc-45aa-9a60-57b49ba1ac81-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-dckct\" (UID: \"fa7be649-d9bc-45aa-9a60-57b49ba1ac81\") " pod="tigera-operator/tigera-operator-6bf85f8dd-dckct" Mar 6 03:02:10.726979 containerd[1544]: time="2026-03-06T03:02:10.726918513Z" level=info msg="StartContainer for \"d555fc98845961eec25f98dded48f4c27da49efb579f5ae936f78b2bc50904a7\" returns successfully" Mar 6 03:02:10.978220 containerd[1544]: time="2026-03-06T03:02:10.977866913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-dckct,Uid:fa7be649-d9bc-45aa-9a60-57b49ba1ac81,Namespace:tigera-operator,Attempt:0,}" Mar 6 03:02:11.001897 containerd[1544]: time="2026-03-06T03:02:11.001832538Z" level=info msg="connecting to shim 8bad90b208c494ada7aa2e95a069717d167e0dd959e1f8a00ef5d1d7a3ab171e" address="unix:///run/containerd/s/4413cd81d693b4184d2c804d0ad2132ffd90aefcda278eb858b68a0fbdfaf45b" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:11.071667 systemd[1]: Started cri-containerd-8bad90b208c494ada7aa2e95a069717d167e0dd959e1f8a00ef5d1d7a3ab171e.scope - libcontainer container 8bad90b208c494ada7aa2e95a069717d167e0dd959e1f8a00ef5d1d7a3ab171e. 
Mar 6 03:02:11.168732 containerd[1544]: time="2026-03-06T03:02:11.168366414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-dckct,Uid:fa7be649-d9bc-45aa-9a60-57b49ba1ac81,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8bad90b208c494ada7aa2e95a069717d167e0dd959e1f8a00ef5d1d7a3ab171e\"" Mar 6 03:02:11.171579 containerd[1544]: time="2026-03-06T03:02:11.171394081Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 6 03:02:12.596043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164163160.mount: Deactivated successfully. Mar 6 03:02:13.074095 kubelet[2775]: I0306 03:02:13.073943 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pqhpv" podStartSLOduration=3.073920169 podStartE2EDuration="3.073920169s" podCreationTimestamp="2026-03-06 03:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:11.048244277 +0000 UTC m=+6.330433581" watchObservedRunningTime="2026-03-06 03:02:13.073920169 +0000 UTC m=+8.356109473" Mar 6 03:02:14.383361 containerd[1544]: time="2026-03-06T03:02:14.383289326Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:14.384782 containerd[1544]: time="2026-03-06T03:02:14.384505025Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 6 03:02:14.385880 containerd[1544]: time="2026-03-06T03:02:14.385832275Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:14.388784 containerd[1544]: time="2026-03-06T03:02:14.388743074Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:14.390023 containerd[1544]: time="2026-03-06T03:02:14.389777772Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.218328327s" Mar 6 03:02:14.390023 containerd[1544]: time="2026-03-06T03:02:14.389821472Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 6 03:02:14.395895 containerd[1544]: time="2026-03-06T03:02:14.395372254Z" level=info msg="CreateContainer within sandbox \"8bad90b208c494ada7aa2e95a069717d167e0dd959e1f8a00ef5d1d7a3ab171e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 6 03:02:14.407111 containerd[1544]: time="2026-03-06T03:02:14.405255509Z" level=info msg="Container babdc3f8b69100608dbd425586b45ea3d86651d56436a983da95bafedbb47c9c: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:14.418337 containerd[1544]: time="2026-03-06T03:02:14.418286257Z" level=info msg="CreateContainer within sandbox \"8bad90b208c494ada7aa2e95a069717d167e0dd959e1f8a00ef5d1d7a3ab171e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"babdc3f8b69100608dbd425586b45ea3d86651d56436a983da95bafedbb47c9c\"" Mar 6 03:02:14.418956 containerd[1544]: time="2026-03-06T03:02:14.418892172Z" level=info msg="StartContainer for \"babdc3f8b69100608dbd425586b45ea3d86651d56436a983da95bafedbb47c9c\"" Mar 6 03:02:14.420801 containerd[1544]: time="2026-03-06T03:02:14.420760456Z" level=info msg="connecting to shim 
babdc3f8b69100608dbd425586b45ea3d86651d56436a983da95bafedbb47c9c" address="unix:///run/containerd/s/4413cd81d693b4184d2c804d0ad2132ffd90aefcda278eb858b68a0fbdfaf45b" protocol=ttrpc version=3 Mar 6 03:02:14.453366 systemd[1]: Started cri-containerd-babdc3f8b69100608dbd425586b45ea3d86651d56436a983da95bafedbb47c9c.scope - libcontainer container babdc3f8b69100608dbd425586b45ea3d86651d56436a983da95bafedbb47c9c. Mar 6 03:02:14.498204 containerd[1544]: time="2026-03-06T03:02:14.498136641Z" level=info msg="StartContainer for \"babdc3f8b69100608dbd425586b45ea3d86651d56436a983da95bafedbb47c9c\" returns successfully" Mar 6 03:02:15.865404 kubelet[2775]: I0306 03:02:15.865322 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-dckct" podStartSLOduration=2.644474629 podStartE2EDuration="5.865300519s" podCreationTimestamp="2026-03-06 03:02:10 +0000 UTC" firstStartedPulling="2026-03-06 03:02:11.170279063 +0000 UTC m=+6.452468340" lastFinishedPulling="2026-03-06 03:02:14.391104934 +0000 UTC m=+9.673294230" observedRunningTime="2026-03-06 03:02:15.051795832 +0000 UTC m=+10.333985151" watchObservedRunningTime="2026-03-06 03:02:15.865300519 +0000 UTC m=+11.147489822" Mar 6 03:02:16.122344 update_engine[1526]: I20260306 03:02:16.122164 1526 update_attempter.cc:509] Updating boot flags... Mar 6 03:02:22.102416 sudo[1834]: pam_unix(sudo:session): session closed for user root Mar 6 03:02:22.144851 sshd[1833]: Connection closed by 20.161.92.111 port 53176 Mar 6 03:02:22.146117 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Mar 6 03:02:22.155679 systemd-logind[1518]: Session 7 logged out. Waiting for processes to exit. Mar 6 03:02:22.156868 systemd[1]: sshd@6-10.128.0.102:22-20.161.92.111:53176.service: Deactivated successfully. Mar 6 03:02:22.161971 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 03:02:22.162789 systemd[1]: session-7.scope: Consumed 8.093s CPU time, 231M memory peak. 
Mar 6 03:02:22.169795 systemd-logind[1518]: Removed session 7. Mar 6 03:02:26.492520 systemd[1]: Created slice kubepods-besteffort-podefcb7915_f263_49f7_805e_4811b2a5e0a8.slice - libcontainer container kubepods-besteffort-podefcb7915_f263_49f7_805e_4811b2a5e0a8.slice. Mar 6 03:02:26.590592 systemd[1]: Created slice kubepods-besteffort-pode4169c4d_a0fc_46cb_92ef_ef38629f915b.slice - libcontainer container kubepods-besteffort-pode4169c4d_a0fc_46cb_92ef_ef38629f915b.slice. Mar 6 03:02:26.603278 kubelet[2775]: I0306 03:02:26.603232 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efcb7915-f263-49f7-805e-4811b2a5e0a8-tigera-ca-bundle\") pod \"calico-typha-6c65965bd5-mp2xm\" (UID: \"efcb7915-f263-49f7-805e-4811b2a5e0a8\") " pod="calico-system/calico-typha-6c65965bd5-mp2xm" Mar 6 03:02:26.603817 kubelet[2775]: I0306 03:02:26.603288 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw2hm\" (UniqueName: \"kubernetes.io/projected/efcb7915-f263-49f7-805e-4811b2a5e0a8-kube-api-access-mw2hm\") pod \"calico-typha-6c65965bd5-mp2xm\" (UID: \"efcb7915-f263-49f7-805e-4811b2a5e0a8\") " pod="calico-system/calico-typha-6c65965bd5-mp2xm" Mar 6 03:02:26.603817 kubelet[2775]: I0306 03:02:26.603324 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/efcb7915-f263-49f7-805e-4811b2a5e0a8-typha-certs\") pod \"calico-typha-6c65965bd5-mp2xm\" (UID: \"efcb7915-f263-49f7-805e-4811b2a5e0a8\") " pod="calico-system/calico-typha-6c65965bd5-mp2xm" Mar 6 03:02:26.688540 kubelet[2775]: E0306 03:02:26.688066 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:26.704180 kubelet[2775]: I0306 03:02:26.703733 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-flexvol-driver-host\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.704180 kubelet[2775]: I0306 03:02:26.703780 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-policysync\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.704180 kubelet[2775]: I0306 03:02:26.703807 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7stbn\" (UniqueName: \"kubernetes.io/projected/e4169c4d-a0fc-46cb-92ef-ef38629f915b-kube-api-access-7stbn\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.704180 kubelet[2775]: I0306 03:02:26.703852 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-cni-bin-dir\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.704180 kubelet[2775]: I0306 03:02:26.703877 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-cni-net-dir\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " 
pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.704563 kubelet[2775]: I0306 03:02:26.703903 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-nodeproc\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.704563 kubelet[2775]: I0306 03:02:26.703943 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4169c4d-a0fc-46cb-92ef-ef38629f915b-tigera-ca-bundle\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.704563 kubelet[2775]: I0306 03:02:26.703982 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-sys-fs\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.704563 kubelet[2775]: I0306 03:02:26.704054 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-var-lib-calico\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.705348 kubelet[2775]: I0306 03:02:26.705317 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-var-run-calico\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.707491 kubelet[2775]: I0306 
03:02:26.707444 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-bpffs\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.709309 kubelet[2775]: I0306 03:02:26.708543 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-cni-log-dir\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.709309 kubelet[2775]: I0306 03:02:26.708716 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-lib-modules\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.709309 kubelet[2775]: I0306 03:02:26.708846 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e4169c4d-a0fc-46cb-92ef-ef38629f915b-node-certs\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.709554 kubelet[2775]: I0306 03:02:26.709068 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4169c4d-a0fc-46cb-92ef-ef38629f915b-xtables-lock\") pod \"calico-node-56tbf\" (UID: \"e4169c4d-a0fc-46cb-92ef-ef38629f915b\") " pod="calico-system/calico-node-56tbf" Mar 6 03:02:26.809132 containerd[1544]: time="2026-03-06T03:02:26.808762860Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-6c65965bd5-mp2xm,Uid:efcb7915-f263-49f7-805e-4811b2a5e0a8,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:26.812508 kubelet[2775]: I0306 03:02:26.812389 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/92f0c9a8-d6bf-42d2-b2a2-ef49488f9604-varrun\") pod \"csi-node-driver-fkzvq\" (UID: \"92f0c9a8-d6bf-42d2-b2a2-ef49488f9604\") " pod="calico-system/csi-node-driver-fkzvq" Mar 6 03:02:26.812508 kubelet[2775]: I0306 03:02:26.812475 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tkww\" (UniqueName: \"kubernetes.io/projected/92f0c9a8-d6bf-42d2-b2a2-ef49488f9604-kube-api-access-2tkww\") pod \"csi-node-driver-fkzvq\" (UID: \"92f0c9a8-d6bf-42d2-b2a2-ef49488f9604\") " pod="calico-system/csi-node-driver-fkzvq" Mar 6 03:02:26.812870 kubelet[2775]: I0306 03:02:26.812846 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92f0c9a8-d6bf-42d2-b2a2-ef49488f9604-kubelet-dir\") pod \"csi-node-driver-fkzvq\" (UID: \"92f0c9a8-d6bf-42d2-b2a2-ef49488f9604\") " pod="calico-system/csi-node-driver-fkzvq" Mar 6 03:02:26.813584 kubelet[2775]: I0306 03:02:26.812942 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/92f0c9a8-d6bf-42d2-b2a2-ef49488f9604-registration-dir\") pod \"csi-node-driver-fkzvq\" (UID: \"92f0c9a8-d6bf-42d2-b2a2-ef49488f9604\") " pod="calico-system/csi-node-driver-fkzvq" Mar 6 03:02:26.813584 kubelet[2775]: I0306 03:02:26.813135 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/92f0c9a8-d6bf-42d2-b2a2-ef49488f9604-socket-dir\") pod \"csi-node-driver-fkzvq\" 
(UID: \"92f0c9a8-d6bf-42d2-b2a2-ef49488f9604\") " pod="calico-system/csi-node-driver-fkzvq" Mar 6 03:02:26.815218 kubelet[2775]: E0306 03:02:26.815189 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.815416 kubelet[2775]: W0306 03:02:26.815218 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.815416 kubelet[2775]: E0306 03:02:26.815242 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.816242 kubelet[2775]: E0306 03:02:26.816216 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.816242 kubelet[2775]: W0306 03:02:26.816237 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.817132 kubelet[2775]: E0306 03:02:26.816256 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.817132 kubelet[2775]: E0306 03:02:26.816644 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.817132 kubelet[2775]: W0306 03:02:26.816659 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.817132 kubelet[2775]: E0306 03:02:26.816676 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.817657 kubelet[2775]: E0306 03:02:26.817634 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.818017 kubelet[2775]: W0306 03:02:26.817656 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.818017 kubelet[2775]: E0306 03:02:26.817779 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.819060 kubelet[2775]: E0306 03:02:26.819037 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.819060 kubelet[2775]: W0306 03:02:26.819059 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.819296 kubelet[2775]: E0306 03:02:26.819094 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.822150 kubelet[2775]: E0306 03:02:26.822118 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.822340 kubelet[2775]: W0306 03:02:26.822271 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.822340 kubelet[2775]: E0306 03:02:26.822294 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.822880 kubelet[2775]: E0306 03:02:26.822863 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.823060 kubelet[2775]: W0306 03:02:26.822996 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.823060 kubelet[2775]: E0306 03:02:26.823017 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.823769 kubelet[2775]: E0306 03:02:26.823701 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.823769 kubelet[2775]: W0306 03:02:26.823736 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.823769 kubelet[2775]: E0306 03:02:26.823752 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.826187 kubelet[2775]: E0306 03:02:26.826164 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.826677 kubelet[2775]: W0306 03:02:26.826516 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.826677 kubelet[2775]: E0306 03:02:26.826545 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.827120 kubelet[2775]: E0306 03:02:26.827016 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.827120 kubelet[2775]: W0306 03:02:26.827033 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.827120 kubelet[2775]: E0306 03:02:26.827049 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.828279 kubelet[2775]: E0306 03:02:26.828107 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.828279 kubelet[2775]: W0306 03:02:26.828125 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.828279 kubelet[2775]: E0306 03:02:26.828142 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.828783 kubelet[2775]: E0306 03:02:26.828729 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.828783 kubelet[2775]: W0306 03:02:26.828747 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.828783 kubelet[2775]: E0306 03:02:26.828762 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.833098 kubelet[2775]: E0306 03:02:26.832486 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.833098 kubelet[2775]: W0306 03:02:26.832505 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.833098 kubelet[2775]: E0306 03:02:26.832522 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.836308 kubelet[2775]: E0306 03:02:26.836244 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.836308 kubelet[2775]: W0306 03:02:26.836263 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.836308 kubelet[2775]: E0306 03:02:26.836280 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.837257 kubelet[2775]: E0306 03:02:26.837189 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.837257 kubelet[2775]: W0306 03:02:26.837211 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.837257 kubelet[2775]: E0306 03:02:26.837233 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.839419 kubelet[2775]: E0306 03:02:26.839362 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.839419 kubelet[2775]: W0306 03:02:26.839381 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.839419 kubelet[2775]: E0306 03:02:26.839399 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.840541 kubelet[2775]: E0306 03:02:26.840483 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.840541 kubelet[2775]: W0306 03:02:26.840503 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.840541 kubelet[2775]: E0306 03:02:26.840521 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.841215 kubelet[2775]: E0306 03:02:26.841194 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.841374 kubelet[2775]: W0306 03:02:26.841329 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.841374 kubelet[2775]: E0306 03:02:26.841356 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.841970 kubelet[2775]: E0306 03:02:26.841900 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.841970 kubelet[2775]: W0306 03:02:26.841921 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.841970 kubelet[2775]: E0306 03:02:26.841938 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.842996 kubelet[2775]: E0306 03:02:26.842977 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.843157 kubelet[2775]: W0306 03:02:26.843115 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.843157 kubelet[2775]: E0306 03:02:26.843138 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.843694 kubelet[2775]: E0306 03:02:26.843660 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.843971 kubelet[2775]: W0306 03:02:26.843677 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.843971 kubelet[2775]: E0306 03:02:26.843804 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.847721 kubelet[2775]: E0306 03:02:26.847664 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.847721 kubelet[2775]: W0306 03:02:26.847683 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.847721 kubelet[2775]: E0306 03:02:26.847700 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.849439 kubelet[2775]: E0306 03:02:26.849383 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.849439 kubelet[2775]: W0306 03:02:26.849404 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.849439 kubelet[2775]: E0306 03:02:26.849420 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.851241 kubelet[2775]: E0306 03:02:26.851187 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.851241 kubelet[2775]: W0306 03:02:26.851206 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.851241 kubelet[2775]: E0306 03:02:26.851222 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.854951 kubelet[2775]: E0306 03:02:26.854841 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.854951 kubelet[2775]: W0306 03:02:26.854861 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.854951 kubelet[2775]: E0306 03:02:26.854880 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.855965 kubelet[2775]: E0306 03:02:26.855909 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.856142 kubelet[2775]: W0306 03:02:26.856123 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.856331 kubelet[2775]: E0306 03:02:26.856249 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.856718 kubelet[2775]: E0306 03:02:26.856664 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.856718 kubelet[2775]: W0306 03:02:26.856681 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.856718 kubelet[2775]: E0306 03:02:26.856697 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.857524 kubelet[2775]: E0306 03:02:26.857461 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.857524 kubelet[2775]: W0306 03:02:26.857481 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.857524 kubelet[2775]: E0306 03:02:26.857497 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.858321 kubelet[2775]: E0306 03:02:26.858192 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.858321 kubelet[2775]: W0306 03:02:26.858233 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.858321 kubelet[2775]: E0306 03:02:26.858250 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.858907 kubelet[2775]: E0306 03:02:26.858852 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.859155 kubelet[2775]: W0306 03:02:26.859008 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.859155 kubelet[2775]: E0306 03:02:26.859034 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.859730 kubelet[2775]: E0306 03:02:26.859685 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.859929 kubelet[2775]: W0306 03:02:26.859832 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.859929 kubelet[2775]: E0306 03:02:26.859857 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.860508 kubelet[2775]: E0306 03:02:26.860454 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.860508 kubelet[2775]: W0306 03:02:26.860473 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.860508 kubelet[2775]: E0306 03:02:26.860489 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.861105 kubelet[2775]: E0306 03:02:26.861059 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.861254 kubelet[2775]: W0306 03:02:26.861200 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.861254 kubelet[2775]: E0306 03:02:26.861224 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.861690 kubelet[2775]: E0306 03:02:26.861641 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.861690 kubelet[2775]: W0306 03:02:26.861659 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.861690 kubelet[2775]: E0306 03:02:26.861673 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.862388 kubelet[2775]: E0306 03:02:26.862329 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.862388 kubelet[2775]: W0306 03:02:26.862351 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.862388 kubelet[2775]: E0306 03:02:26.862370 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.863030 kubelet[2775]: E0306 03:02:26.862994 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.863030 kubelet[2775]: W0306 03:02:26.863012 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.863307 kubelet[2775]: E0306 03:02:26.863049 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.863502 kubelet[2775]: E0306 03:02:26.863483 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.863502 kubelet[2775]: W0306 03:02:26.863502 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.863653 kubelet[2775]: E0306 03:02:26.863543 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.864140 kubelet[2775]: E0306 03:02:26.864105 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.864140 kubelet[2775]: W0306 03:02:26.864134 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.864404 kubelet[2775]: E0306 03:02:26.864150 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.864810 kubelet[2775]: E0306 03:02:26.864787 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.864810 kubelet[2775]: W0306 03:02:26.864809 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.865011 kubelet[2775]: E0306 03:02:26.864889 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.865853 kubelet[2775]: E0306 03:02:26.865829 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.865853 kubelet[2775]: W0306 03:02:26.865852 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.866023 kubelet[2775]: E0306 03:02:26.865871 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.866300 kubelet[2775]: E0306 03:02:26.866277 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.866300 kubelet[2775]: W0306 03:02:26.866298 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.866451 kubelet[2775]: E0306 03:02:26.866315 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.866954 kubelet[2775]: E0306 03:02:26.866925 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.866954 kubelet[2775]: W0306 03:02:26.866946 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.867133 kubelet[2775]: E0306 03:02:26.866963 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.867694 kubelet[2775]: E0306 03:02:26.867659 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.868013 kubelet[2775]: W0306 03:02:26.867684 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.868013 kubelet[2775]: E0306 03:02:26.867766 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.870359 kubelet[2775]: E0306 03:02:26.869865 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.870359 kubelet[2775]: W0306 03:02:26.869884 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.870359 kubelet[2775]: E0306 03:02:26.869905 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.870573 kubelet[2775]: E0306 03:02:26.870519 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.870573 kubelet[2775]: W0306 03:02:26.870534 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.870573 kubelet[2775]: E0306 03:02:26.870551 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.871174 kubelet[2775]: E0306 03:02:26.871148 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.871174 kubelet[2775]: W0306 03:02:26.871173 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.871637 kubelet[2775]: E0306 03:02:26.871217 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.872759 containerd[1544]: time="2026-03-06T03:02:26.872532966Z" level=info msg="connecting to shim 16273b26010a37f9c1b0ae671a20a4ffc4e83fc955e97f1c2111eb1ffb4400a1" address="unix:///run/containerd/s/5b76aa5476dcb0e729a0d4540f403e71ea2059c4031efe6a0473decf01a1bfbd" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:26.872864 kubelet[2775]: E0306 03:02:26.872668 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.872864 kubelet[2775]: W0306 03:02:26.872683 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.872864 kubelet[2775]: E0306 03:02:26.872713 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.875284 kubelet[2775]: E0306 03:02:26.874101 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.875284 kubelet[2775]: W0306 03:02:26.874121 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.875284 kubelet[2775]: E0306 03:02:26.874138 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.875284 kubelet[2775]: E0306 03:02:26.875155 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.875284 kubelet[2775]: W0306 03:02:26.875173 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.875284 kubelet[2775]: E0306 03:02:26.875192 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.875624 kubelet[2775]: E0306 03:02:26.875462 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.875624 kubelet[2775]: W0306 03:02:26.875474 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.875624 kubelet[2775]: E0306 03:02:26.875489 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.877215 kubelet[2775]: E0306 03:02:26.877191 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.877215 kubelet[2775]: W0306 03:02:26.877215 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.877368 kubelet[2775]: E0306 03:02:26.877232 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.879430 kubelet[2775]: E0306 03:02:26.879406 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.879430 kubelet[2775]: W0306 03:02:26.879429 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.879578 kubelet[2775]: E0306 03:02:26.879445 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.888224 kubelet[2775]: E0306 03:02:26.888202 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.888342 kubelet[2775]: W0306 03:02:26.888325 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.888533 kubelet[2775]: E0306 03:02:26.888487 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.897818 containerd[1544]: time="2026-03-06T03:02:26.897771750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-56tbf,Uid:e4169c4d-a0fc-46cb-92ef-ef38629f915b,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:26.915701 kubelet[2775]: E0306 03:02:26.915268 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.915701 kubelet[2775]: W0306 03:02:26.915298 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.915701 kubelet[2775]: E0306 03:02:26.915321 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.916252 kubelet[2775]: E0306 03:02:26.915918 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.916252 kubelet[2775]: W0306 03:02:26.915942 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.916252 kubelet[2775]: E0306 03:02:26.915962 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.916691 kubelet[2775]: E0306 03:02:26.916391 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.916691 kubelet[2775]: W0306 03:02:26.916425 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.916691 kubelet[2775]: E0306 03:02:26.916442 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.917114 kubelet[2775]: E0306 03:02:26.917014 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.917114 kubelet[2775]: W0306 03:02:26.917033 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.917114 kubelet[2775]: E0306 03:02:26.917049 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.918183 kubelet[2775]: E0306 03:02:26.918127 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.918183 kubelet[2775]: W0306 03:02:26.918146 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.918183 kubelet[2775]: E0306 03:02:26.918162 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.919900 kubelet[2775]: E0306 03:02:26.919741 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.920142 kubelet[2775]: W0306 03:02:26.920039 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.920142 kubelet[2775]: E0306 03:02:26.920064 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.920524 systemd[1]: Started cri-containerd-16273b26010a37f9c1b0ae671a20a4ffc4e83fc955e97f1c2111eb1ffb4400a1.scope - libcontainer container 16273b26010a37f9c1b0ae671a20a4ffc4e83fc955e97f1c2111eb1ffb4400a1. Mar 6 03:02:26.922391 kubelet[2775]: E0306 03:02:26.922261 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.922391 kubelet[2775]: W0306 03:02:26.922280 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.922391 kubelet[2775]: E0306 03:02:26.922297 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.923583 kubelet[2775]: E0306 03:02:26.923564 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.923899 kubelet[2775]: W0306 03:02:26.923751 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.923899 kubelet[2775]: E0306 03:02:26.923776 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.924679 kubelet[2775]: E0306 03:02:26.924633 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.924775 kubelet[2775]: W0306 03:02:26.924676 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.924775 kubelet[2775]: E0306 03:02:26.924704 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.925245 kubelet[2775]: E0306 03:02:26.925209 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.925245 kubelet[2775]: W0306 03:02:26.925240 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.925387 kubelet[2775]: E0306 03:02:26.925257 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.925905 kubelet[2775]: E0306 03:02:26.925881 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.926011 kubelet[2775]: W0306 03:02:26.925915 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.926011 kubelet[2775]: E0306 03:02:26.925932 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.926536 kubelet[2775]: E0306 03:02:26.926502 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.926675 kubelet[2775]: W0306 03:02:26.926525 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.926675 kubelet[2775]: E0306 03:02:26.926576 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.927223 kubelet[2775]: E0306 03:02:26.927119 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.927223 kubelet[2775]: W0306 03:02:26.927134 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.927223 kubelet[2775]: E0306 03:02:26.927149 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.927853 kubelet[2775]: E0306 03:02:26.927829 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.927853 kubelet[2775]: W0306 03:02:26.927852 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.928008 kubelet[2775]: E0306 03:02:26.927868 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.928847 kubelet[2775]: E0306 03:02:26.928416 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.928847 kubelet[2775]: W0306 03:02:26.928456 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.928847 kubelet[2775]: E0306 03:02:26.928474 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.929060 kubelet[2775]: E0306 03:02:26.928931 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.929060 kubelet[2775]: W0306 03:02:26.928955 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.929060 kubelet[2775]: E0306 03:02:26.928973 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.929517 kubelet[2775]: E0306 03:02:26.929490 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.929517 kubelet[2775]: W0306 03:02:26.929512 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.929651 kubelet[2775]: E0306 03:02:26.929527 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.930004 kubelet[2775]: E0306 03:02:26.929976 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.930111 kubelet[2775]: W0306 03:02:26.930024 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.930111 kubelet[2775]: E0306 03:02:26.930042 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.930912 kubelet[2775]: E0306 03:02:26.930807 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.930912 kubelet[2775]: W0306 03:02:26.930824 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.930912 kubelet[2775]: E0306 03:02:26.930839 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.931741 kubelet[2775]: E0306 03:02:26.931580 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.931741 kubelet[2775]: W0306 03:02:26.931599 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.931741 kubelet[2775]: E0306 03:02:26.931616 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.933319 kubelet[2775]: E0306 03:02:26.933286 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.933474 kubelet[2775]: W0306 03:02:26.933437 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.934099 kubelet[2775]: E0306 03:02:26.933463 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.934517 kubelet[2775]: E0306 03:02:26.934484 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.934517 kubelet[2775]: W0306 03:02:26.934514 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.934642 kubelet[2775]: E0306 03:02:26.934531 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.934869 kubelet[2775]: E0306 03:02:26.934849 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.934948 kubelet[2775]: W0306 03:02:26.934890 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.934948 kubelet[2775]: E0306 03:02:26.934908 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.935556 kubelet[2775]: E0306 03:02:26.935430 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.935556 kubelet[2775]: W0306 03:02:26.935555 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.935716 kubelet[2775]: E0306 03:02:26.935573 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.938125 kubelet[2775]: E0306 03:02:26.938100 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.938125 kubelet[2775]: W0306 03:02:26.938123 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.938273 kubelet[2775]: E0306 03:02:26.938140 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 03:02:26.951278 containerd[1544]: time="2026-03-06T03:02:26.950946361Z" level=info msg="connecting to shim 485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055" address="unix:///run/containerd/s/6d4a85d20b847f883a50cc54a74d84079ee75e3e219dfdc2f4a9bf2cc3e9f56f" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:26.960500 kubelet[2775]: E0306 03:02:26.960390 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 03:02:26.960500 kubelet[2775]: W0306 03:02:26.960420 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 03:02:26.960500 kubelet[2775]: E0306 03:02:26.960446 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 03:02:26.995540 systemd[1]: Started cri-containerd-485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055.scope - libcontainer container 485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055. 
Mar 6 03:02:27.051616 containerd[1544]: time="2026-03-06T03:02:27.051549409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-56tbf,Uid:e4169c4d-a0fc-46cb-92ef-ef38629f915b,Namespace:calico-system,Attempt:0,} returns sandbox id \"485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055\"" Mar 6 03:02:27.057588 containerd[1544]: time="2026-03-06T03:02:27.057529221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 03:02:27.058475 containerd[1544]: time="2026-03-06T03:02:27.058418272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c65965bd5-mp2xm,Uid:efcb7915-f263-49f7-805e-4811b2a5e0a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"16273b26010a37f9c1b0ae671a20a4ffc4e83fc955e97f1c2111eb1ffb4400a1\"" Mar 6 03:02:27.945798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258502727.mount: Deactivated successfully. Mar 6 03:02:27.967592 kubelet[2775]: E0306 03:02:27.967529 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:28.075640 containerd[1544]: time="2026-03-06T03:02:28.075571080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:28.076963 containerd[1544]: time="2026-03-06T03:02:28.076768200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 6 03:02:28.078276 containerd[1544]: time="2026-03-06T03:02:28.078235404Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 6 03:02:28.082816 containerd[1544]: time="2026-03-06T03:02:28.082738083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:28.084145 containerd[1544]: time="2026-03-06T03:02:28.083713203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.025932709s" Mar 6 03:02:28.084145 containerd[1544]: time="2026-03-06T03:02:28.083758780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 03:02:28.085512 containerd[1544]: time="2026-03-06T03:02:28.085480924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 03:02:28.090441 containerd[1544]: time="2026-03-06T03:02:28.090405439Z" level=info msg="CreateContainer within sandbox \"485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 03:02:28.105111 containerd[1544]: time="2026-03-06T03:02:28.101703568Z" level=info msg="Container 9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:28.113916 containerd[1544]: time="2026-03-06T03:02:28.113862481Z" level=info msg="CreateContainer within sandbox \"485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id 
\"9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95\"" Mar 6 03:02:28.114799 containerd[1544]: time="2026-03-06T03:02:28.114702966Z" level=info msg="StartContainer for \"9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95\"" Mar 6 03:02:28.117424 containerd[1544]: time="2026-03-06T03:02:28.117377087Z" level=info msg="connecting to shim 9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95" address="unix:///run/containerd/s/6d4a85d20b847f883a50cc54a74d84079ee75e3e219dfdc2f4a9bf2cc3e9f56f" protocol=ttrpc version=3 Mar 6 03:02:28.149448 systemd[1]: Started cri-containerd-9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95.scope - libcontainer container 9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95. Mar 6 03:02:28.246120 containerd[1544]: time="2026-03-06T03:02:28.245811867Z" level=info msg="StartContainer for \"9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95\" returns successfully" Mar 6 03:02:28.263573 systemd[1]: cri-containerd-9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95.scope: Deactivated successfully. Mar 6 03:02:28.270760 containerd[1544]: time="2026-03-06T03:02:28.270707781Z" level=info msg="received container exit event container_id:\"9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95\" id:\"9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95\" pid:3404 exited_at:{seconds:1772766148 nanos:270305936}" Mar 6 03:02:28.735518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9902ad58edca5217aa511f6b1a9cd585198925f9f198291c50e8f072c533cb95-rootfs.mount: Deactivated successfully. 
Mar 6 03:02:29.967446 kubelet[2775]: E0306 03:02:29.967340 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:30.706533 containerd[1544]: time="2026-03-06T03:02:30.706471101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:30.708195 containerd[1544]: time="2026-03-06T03:02:30.707943070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 6 03:02:30.709387 containerd[1544]: time="2026-03-06T03:02:30.709346142Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:30.712455 containerd[1544]: time="2026-03-06T03:02:30.712420531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:30.713960 containerd[1544]: time="2026-03-06T03:02:30.713346494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.627815249s" Mar 6 03:02:30.713960 containerd[1544]: time="2026-03-06T03:02:30.713388958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 03:02:30.715965 containerd[1544]: time="2026-03-06T03:02:30.715933252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 03:02:30.738517 containerd[1544]: time="2026-03-06T03:02:30.738474739Z" level=info msg="CreateContainer within sandbox \"16273b26010a37f9c1b0ae671a20a4ffc4e83fc955e97f1c2111eb1ffb4400a1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 03:02:30.748219 containerd[1544]: time="2026-03-06T03:02:30.748180075Z" level=info msg="Container 118f29bd698f42f472a37019cd910262d2680b684e22a02b4b3cf9d780d9efd9: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:30.765507 containerd[1544]: time="2026-03-06T03:02:30.765446819Z" level=info msg="CreateContainer within sandbox \"16273b26010a37f9c1b0ae671a20a4ffc4e83fc955e97f1c2111eb1ffb4400a1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"118f29bd698f42f472a37019cd910262d2680b684e22a02b4b3cf9d780d9efd9\"" Mar 6 03:02:30.766312 containerd[1544]: time="2026-03-06T03:02:30.766274818Z" level=info msg="StartContainer for \"118f29bd698f42f472a37019cd910262d2680b684e22a02b4b3cf9d780d9efd9\"" Mar 6 03:02:30.767890 containerd[1544]: time="2026-03-06T03:02:30.767836766Z" level=info msg="connecting to shim 118f29bd698f42f472a37019cd910262d2680b684e22a02b4b3cf9d780d9efd9" address="unix:///run/containerd/s/5b76aa5476dcb0e729a0d4540f403e71ea2059c4031efe6a0473decf01a1bfbd" protocol=ttrpc version=3 Mar 6 03:02:30.805302 systemd[1]: Started cri-containerd-118f29bd698f42f472a37019cd910262d2680b684e22a02b4b3cf9d780d9efd9.scope - libcontainer container 118f29bd698f42f472a37019cd910262d2680b684e22a02b4b3cf9d780d9efd9. 
Mar 6 03:02:30.889435 containerd[1544]: time="2026-03-06T03:02:30.889376826Z" level=info msg="StartContainer for \"118f29bd698f42f472a37019cd910262d2680b684e22a02b4b3cf9d780d9efd9\" returns successfully" Mar 6 03:02:31.970121 kubelet[2775]: E0306 03:02:31.967802 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:32.104102 kubelet[2775]: I0306 03:02:32.103313 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 03:02:33.967612 kubelet[2775]: E0306 03:02:33.967544 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:35.967808 kubelet[2775]: E0306 03:02:35.967483 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:37.489542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131635976.mount: Deactivated successfully. 
Mar 6 03:02:37.525369 containerd[1544]: time="2026-03-06T03:02:37.525288675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:37.528112 containerd[1544]: time="2026-03-06T03:02:37.527373460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 03:02:37.529987 containerd[1544]: time="2026-03-06T03:02:37.529948892Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:37.535527 containerd[1544]: time="2026-03-06T03:02:37.535491869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:37.536235 containerd[1544]: time="2026-03-06T03:02:37.536193227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.820192977s" Mar 6 03:02:37.536348 containerd[1544]: time="2026-03-06T03:02:37.536239277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 03:02:37.542992 containerd[1544]: time="2026-03-06T03:02:37.542912201Z" level=info msg="CreateContainer within sandbox \"485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 03:02:37.557291 containerd[1544]: time="2026-03-06T03:02:37.557244443Z" level=info msg="Container 
2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:37.576117 containerd[1544]: time="2026-03-06T03:02:37.576062976Z" level=info msg="CreateContainer within sandbox \"485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e\"" Mar 6 03:02:37.576905 containerd[1544]: time="2026-03-06T03:02:37.576794353Z" level=info msg="StartContainer for \"2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e\"" Mar 6 03:02:37.579065 containerd[1544]: time="2026-03-06T03:02:37.579030936Z" level=info msg="connecting to shim 2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e" address="unix:///run/containerd/s/6d4a85d20b847f883a50cc54a74d84079ee75e3e219dfdc2f4a9bf2cc3e9f56f" protocol=ttrpc version=3 Mar 6 03:02:37.616311 systemd[1]: Started cri-containerd-2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e.scope - libcontainer container 2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e. Mar 6 03:02:37.725456 containerd[1544]: time="2026-03-06T03:02:37.725331506Z" level=info msg="StartContainer for \"2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e\" returns successfully" Mar 6 03:02:37.790576 systemd[1]: cri-containerd-2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e.scope: Deactivated successfully. 
Mar 6 03:02:37.799774 containerd[1544]: time="2026-03-06T03:02:37.798619300Z" level=info msg="received container exit event container_id:\"2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e\" id:\"2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e\" pid:3499 exited_at:{seconds:1772766157 nanos:797013949}" Mar 6 03:02:37.967100 kubelet[2775]: E0306 03:02:37.967012 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:38.146293 kubelet[2775]: I0306 03:02:38.145968 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c65965bd5-mp2xm" podStartSLOduration=8.49207219 podStartE2EDuration="12.145925868s" podCreationTimestamp="2026-03-06 03:02:26 +0000 UTC" firstStartedPulling="2026-03-06 03:02:27.060754866 +0000 UTC m=+22.342944145" lastFinishedPulling="2026-03-06 03:02:30.714608531 +0000 UTC m=+25.996797823" observedRunningTime="2026-03-06 03:02:31.161354982 +0000 UTC m=+26.443544285" watchObservedRunningTime="2026-03-06 03:02:38.145925868 +0000 UTC m=+33.428115171" Mar 6 03:02:38.487506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e45849149f423886ad58995a914c9ef02ac480f3e186066ce458bdd03f8ba7e-rootfs.mount: Deactivated successfully. 
Mar 6 03:02:39.967429 kubelet[2775]: E0306 03:02:39.967360 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:40.137915 containerd[1544]: time="2026-03-06T03:02:40.137848552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 03:02:41.791056 kubelet[2775]: I0306 03:02:41.790971 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 03:02:41.968009 kubelet[2775]: E0306 03:02:41.967260 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:43.225006 containerd[1544]: time="2026-03-06T03:02:43.224938394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:43.226429 containerd[1544]: time="2026-03-06T03:02:43.226372696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 03:02:43.227814 containerd[1544]: time="2026-03-06T03:02:43.227748331Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:43.230733 containerd[1544]: time="2026-03-06T03:02:43.230672428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 
03:02:43.231836 containerd[1544]: time="2026-03-06T03:02:43.231679178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.093779149s" Mar 6 03:02:43.231836 containerd[1544]: time="2026-03-06T03:02:43.231719247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 03:02:43.236707 containerd[1544]: time="2026-03-06T03:02:43.236654886Z" level=info msg="CreateContainer within sandbox \"485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 03:02:43.248112 containerd[1544]: time="2026-03-06T03:02:43.247131613Z" level=info msg="Container 54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:43.263532 containerd[1544]: time="2026-03-06T03:02:43.263478921Z" level=info msg="CreateContainer within sandbox \"485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d\"" Mar 6 03:02:43.264032 containerd[1544]: time="2026-03-06T03:02:43.263994049Z" level=info msg="StartContainer for \"54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d\"" Mar 6 03:02:43.267641 containerd[1544]: time="2026-03-06T03:02:43.267589395Z" level=info msg="connecting to shim 54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d" address="unix:///run/containerd/s/6d4a85d20b847f883a50cc54a74d84079ee75e3e219dfdc2f4a9bf2cc3e9f56f" protocol=ttrpc version=3 Mar 6 03:02:43.298337 
systemd[1]: Started cri-containerd-54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d.scope - libcontainer container 54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d. Mar 6 03:02:43.403130 containerd[1544]: time="2026-03-06T03:02:43.403066678Z" level=info msg="StartContainer for \"54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d\" returns successfully" Mar 6 03:02:43.967340 kubelet[2775]: E0306 03:02:43.967254 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fkzvq" podUID="92f0c9a8-d6bf-42d2-b2a2-ef49488f9604" Mar 6 03:02:44.448758 containerd[1544]: time="2026-03-06T03:02:44.448689029Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 03:02:44.452357 systemd[1]: cri-containerd-54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d.scope: Deactivated successfully. Mar 6 03:02:44.453403 systemd[1]: cri-containerd-54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d.scope: Consumed 713ms CPU time, 198.9M memory peak, 177M written to disk. Mar 6 03:02:44.454538 containerd[1544]: time="2026-03-06T03:02:44.454493036Z" level=info msg="received container exit event container_id:\"54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d\" id:\"54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d\" pid:3564 exited_at:{seconds:1772766164 nanos:454214707}" Mar 6 03:02:44.492771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54002faa2b05b24e07dd03175f6ee6e65133e5ce168e19c4bda19a321213084d-rootfs.mount: Deactivated successfully. 
Mar 6 03:02:44.544995 kubelet[2775]: I0306 03:02:44.544528 2775 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 6 03:02:44.788333 systemd[1]: Created slice kubepods-besteffort-pod6617c0cc_bec3_47a6_9592_47297d1680a9.slice - libcontainer container kubepods-besteffort-pod6617c0cc_bec3_47a6_9592_47297d1680a9.slice. Mar 6 03:02:44.801309 systemd[1]: Created slice kubepods-burstable-podd2f1f201_33f8_4652_8176_bbf25d763a0a.slice - libcontainer container kubepods-burstable-podd2f1f201_33f8_4652_8176_bbf25d763a0a.slice. Mar 6 03:02:44.864248 kubelet[2775]: I0306 03:02:44.864178 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2f1f201-33f8-4652-8176-bbf25d763a0a-config-volume\") pod \"coredns-674b8bbfcf-8pvx7\" (UID: \"d2f1f201-33f8-4652-8176-bbf25d763a0a\") " pod="kube-system/coredns-674b8bbfcf-8pvx7" Mar 6 03:02:44.864248 kubelet[2775]: I0306 03:02:44.864237 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshpc\" (UniqueName: \"kubernetes.io/projected/d2f1f201-33f8-4652-8176-bbf25d763a0a-kube-api-access-sshpc\") pod \"coredns-674b8bbfcf-8pvx7\" (UID: \"d2f1f201-33f8-4652-8176-bbf25d763a0a\") " pod="kube-system/coredns-674b8bbfcf-8pvx7" Mar 6 03:02:44.864591 kubelet[2775]: I0306 03:02:44.864275 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq9j8\" (UniqueName: \"kubernetes.io/projected/6617c0cc-bec3-47a6-9592-47297d1680a9-kube-api-access-jq9j8\") pod \"calico-kube-controllers-7b8cdfbccd-xdsjp\" (UID: \"6617c0cc-bec3-47a6-9592-47297d1680a9\") " pod="calico-system/calico-kube-controllers-7b8cdfbccd-xdsjp" Mar 6 03:02:44.864591 kubelet[2775]: I0306 03:02:44.864316 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6617c0cc-bec3-47a6-9592-47297d1680a9-tigera-ca-bundle\") pod \"calico-kube-controllers-7b8cdfbccd-xdsjp\" (UID: \"6617c0cc-bec3-47a6-9592-47297d1680a9\") " pod="calico-system/calico-kube-controllers-7b8cdfbccd-xdsjp" Mar 6 03:02:45.020456 systemd[1]: Created slice kubepods-burstable-pod4ad502fb_71e7_439a_8f21_0ff5a8f05db0.slice - libcontainer container kubepods-burstable-pod4ad502fb_71e7_439a_8f21_0ff5a8f05db0.slice. Mar 6 03:02:45.074215 systemd[1]: Created slice kubepods-besteffort-poddeaaae01_9814_41f5_a6d2_275edfa5df6e.slice - libcontainer container kubepods-besteffort-poddeaaae01_9814_41f5_a6d2_275edfa5df6e.slice. Mar 6 03:02:45.097881 systemd[1]: Created slice kubepods-besteffort-podde8c1808_0dd2_4e67_850d_f9912f93270e.slice - libcontainer container kubepods-besteffort-podde8c1808_0dd2_4e67_850d_f9912f93270e.slice. Mar 6 03:02:45.100327 containerd[1544]: time="2026-03-06T03:02:45.100280024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8cdfbccd-xdsjp,Uid:6617c0cc-bec3-47a6-9592-47297d1680a9,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:45.119405 containerd[1544]: time="2026-03-06T03:02:45.118803980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pvx7,Uid:d2f1f201-33f8-4652-8176-bbf25d763a0a,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:45.128137 systemd[1]: Created slice kubepods-besteffort-podd14ba1a7_66d0_474e_934c_ba181579d0e6.slice - libcontainer container kubepods-besteffort-podd14ba1a7_66d0_474e_934c_ba181579d0e6.slice. Mar 6 03:02:45.157993 systemd[1]: Created slice kubepods-besteffort-pod3cc1a662_81a2_4a4f_9b4a_2a39ee715a74.slice - libcontainer container kubepods-besteffort-pod3cc1a662_81a2_4a4f_9b4a_2a39ee715a74.slice. 
Mar 6 03:02:45.170264 kubelet[2775]: I0306 03:02:45.170194 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7ctl\" (UniqueName: \"kubernetes.io/projected/deaaae01-9814-41f5-a6d2-275edfa5df6e-kube-api-access-f7ctl\") pod \"goldmane-5b85766d88-mk9x6\" (UID: \"deaaae01-9814-41f5-a6d2-275edfa5df6e\") " pod="calico-system/goldmane-5b85766d88-mk9x6" Mar 6 03:02:45.170264 kubelet[2775]: I0306 03:02:45.170254 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flc7\" (UniqueName: \"kubernetes.io/projected/d14ba1a7-66d0-474e-934c-ba181579d0e6-kube-api-access-4flc7\") pod \"calico-apiserver-7765f8b4cc-f727z\" (UID: \"d14ba1a7-66d0-474e-934c-ba181579d0e6\") " pod="calico-system/calico-apiserver-7765f8b4cc-f727z" Mar 6 03:02:45.172059 kubelet[2775]: I0306 03:02:45.170286 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zrnq\" (UniqueName: \"kubernetes.io/projected/de8c1808-0dd2-4e67-850d-f9912f93270e-kube-api-access-5zrnq\") pod \"calico-apiserver-7765f8b4cc-c5j8p\" (UID: \"de8c1808-0dd2-4e67-850d-f9912f93270e\") " pod="calico-system/calico-apiserver-7765f8b4cc-c5j8p" Mar 6 03:02:45.172059 kubelet[2775]: I0306 03:02:45.170326 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-whisker-backend-key-pair\") pod \"whisker-7666966469-swtsx\" (UID: \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\") " pod="calico-system/whisker-7666966469-swtsx" Mar 6 03:02:45.172059 kubelet[2775]: I0306 03:02:45.170355 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de8c1808-0dd2-4e67-850d-f9912f93270e-calico-apiserver-certs\") pod 
\"calico-apiserver-7765f8b4cc-c5j8p\" (UID: \"de8c1808-0dd2-4e67-850d-f9912f93270e\") " pod="calico-system/calico-apiserver-7765f8b4cc-c5j8p" Mar 6 03:02:45.172059 kubelet[2775]: I0306 03:02:45.170383 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d14ba1a7-66d0-474e-934c-ba181579d0e6-calico-apiserver-certs\") pod \"calico-apiserver-7765f8b4cc-f727z\" (UID: \"d14ba1a7-66d0-474e-934c-ba181579d0e6\") " pod="calico-system/calico-apiserver-7765f8b4cc-f727z" Mar 6 03:02:45.172059 kubelet[2775]: I0306 03:02:45.170428 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-nginx-config\") pod \"whisker-7666966469-swtsx\" (UID: \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\") " pod="calico-system/whisker-7666966469-swtsx" Mar 6 03:02:45.173165 kubelet[2775]: I0306 03:02:45.170463 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-whisker-ca-bundle\") pod \"whisker-7666966469-swtsx\" (UID: \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\") " pod="calico-system/whisker-7666966469-swtsx" Mar 6 03:02:45.173165 kubelet[2775]: I0306 03:02:45.170491 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78bs\" (UniqueName: \"kubernetes.io/projected/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-kube-api-access-b78bs\") pod \"whisker-7666966469-swtsx\" (UID: \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\") " pod="calico-system/whisker-7666966469-swtsx" Mar 6 03:02:45.173165 kubelet[2775]: I0306 03:02:45.170521 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4ad502fb-71e7-439a-8f21-0ff5a8f05db0-config-volume\") pod \"coredns-674b8bbfcf-7jb7j\" (UID: \"4ad502fb-71e7-439a-8f21-0ff5a8f05db0\") " pod="kube-system/coredns-674b8bbfcf-7jb7j" Mar 6 03:02:45.173165 kubelet[2775]: I0306 03:02:45.170549 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deaaae01-9814-41f5-a6d2-275edfa5df6e-config\") pod \"goldmane-5b85766d88-mk9x6\" (UID: \"deaaae01-9814-41f5-a6d2-275edfa5df6e\") " pod="calico-system/goldmane-5b85766d88-mk9x6" Mar 6 03:02:45.173165 kubelet[2775]: I0306 03:02:45.170582 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/deaaae01-9814-41f5-a6d2-275edfa5df6e-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-mk9x6\" (UID: \"deaaae01-9814-41f5-a6d2-275edfa5df6e\") " pod="calico-system/goldmane-5b85766d88-mk9x6" Mar 6 03:02:45.174257 kubelet[2775]: I0306 03:02:45.170611 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8t5\" (UniqueName: \"kubernetes.io/projected/4ad502fb-71e7-439a-8f21-0ff5a8f05db0-kube-api-access-wp8t5\") pod \"coredns-674b8bbfcf-7jb7j\" (UID: \"4ad502fb-71e7-439a-8f21-0ff5a8f05db0\") " pod="kube-system/coredns-674b8bbfcf-7jb7j" Mar 6 03:02:45.174257 kubelet[2775]: I0306 03:02:45.170640 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/deaaae01-9814-41f5-a6d2-275edfa5df6e-goldmane-key-pair\") pod \"goldmane-5b85766d88-mk9x6\" (UID: \"deaaae01-9814-41f5-a6d2-275edfa5df6e\") " pod="calico-system/goldmane-5b85766d88-mk9x6" Mar 6 03:02:45.266037 containerd[1544]: time="2026-03-06T03:02:45.265744877Z" level=info msg="CreateContainer within sandbox 
\"485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 03:02:45.339439 containerd[1544]: time="2026-03-06T03:02:45.339236306Z" level=info msg="Container 7db741994d669819270f8a7c8054c52a56395b4ea4d66023fa11a8c4f38c451b: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:45.368737 containerd[1544]: time="2026-03-06T03:02:45.368665426Z" level=error msg="Failed to destroy network for sandbox \"c0a4f10e9e6c89151853e6ef39d19cd9376c2f69c9e977b87c22b454e4823c7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.373112 containerd[1544]: time="2026-03-06T03:02:45.371718239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8cdfbccd-xdsjp,Uid:6617c0cc-bec3-47a6-9592-47297d1680a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a4f10e9e6c89151853e6ef39d19cd9376c2f69c9e977b87c22b454e4823c7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.375745 kubelet[2775]: E0306 03:02:45.375678 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a4f10e9e6c89151853e6ef39d19cd9376c2f69c9e977b87c22b454e4823c7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.375897 kubelet[2775]: E0306 03:02:45.375777 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c0a4f10e9e6c89151853e6ef39d19cd9376c2f69c9e977b87c22b454e4823c7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b8cdfbccd-xdsjp" Mar 6 03:02:45.375897 kubelet[2775]: E0306 03:02:45.375810 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a4f10e9e6c89151853e6ef39d19cd9376c2f69c9e977b87c22b454e4823c7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b8cdfbccd-xdsjp" Mar 6 03:02:45.376019 kubelet[2775]: E0306 03:02:45.375886 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b8cdfbccd-xdsjp_calico-system(6617c0cc-bec3-47a6-9592-47297d1680a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b8cdfbccd-xdsjp_calico-system(6617c0cc-bec3-47a6-9592-47297d1680a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0a4f10e9e6c89151853e6ef39d19cd9376c2f69c9e977b87c22b454e4823c7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b8cdfbccd-xdsjp" podUID="6617c0cc-bec3-47a6-9592-47297d1680a9" Mar 6 03:02:45.383530 containerd[1544]: time="2026-03-06T03:02:45.383487752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mk9x6,Uid:deaaae01-9814-41f5-a6d2-275edfa5df6e,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:45.393092 containerd[1544]: time="2026-03-06T03:02:45.393034534Z" level=info 
msg="CreateContainer within sandbox \"485c4defbb810eeceb2e55f09bcfca25a1d8174c839392d2e7ac4bc410db0055\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7db741994d669819270f8a7c8054c52a56395b4ea4d66023fa11a8c4f38c451b\"" Mar 6 03:02:45.394601 containerd[1544]: time="2026-03-06T03:02:45.394570541Z" level=info msg="StartContainer for \"7db741994d669819270f8a7c8054c52a56395b4ea4d66023fa11a8c4f38c451b\"" Mar 6 03:02:45.399023 containerd[1544]: time="2026-03-06T03:02:45.398989336Z" level=info msg="connecting to shim 7db741994d669819270f8a7c8054c52a56395b4ea4d66023fa11a8c4f38c451b" address="unix:///run/containerd/s/6d4a85d20b847f883a50cc54a74d84079ee75e3e219dfdc2f4a9bf2cc3e9f56f" protocol=ttrpc version=3 Mar 6 03:02:45.425021 containerd[1544]: time="2026-03-06T03:02:45.424973039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7765f8b4cc-c5j8p,Uid:de8c1808-0dd2-4e67-850d-f9912f93270e,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:45.425479 containerd[1544]: time="2026-03-06T03:02:45.425435976Z" level=error msg="Failed to destroy network for sandbox \"4e722d4e1bd2531b21ec0056c7ca5f8d7fbed8a56153773a872af2ed36a1e0d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.429596 containerd[1544]: time="2026-03-06T03:02:45.429536274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pvx7,Uid:d2f1f201-33f8-4652-8176-bbf25d763a0a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e722d4e1bd2531b21ec0056c7ca5f8d7fbed8a56153773a872af2ed36a1e0d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.430155 kubelet[2775]: E0306 
03:02:45.430012 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e722d4e1bd2531b21ec0056c7ca5f8d7fbed8a56153773a872af2ed36a1e0d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.430155 kubelet[2775]: E0306 03:02:45.430130 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e722d4e1bd2531b21ec0056c7ca5f8d7fbed8a56153773a872af2ed36a1e0d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8pvx7" Mar 6 03:02:45.430155 kubelet[2775]: E0306 03:02:45.430188 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e722d4e1bd2531b21ec0056c7ca5f8d7fbed8a56153773a872af2ed36a1e0d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8pvx7" Mar 6 03:02:45.430642 kubelet[2775]: E0306 03:02:45.430261 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8pvx7_kube-system(d2f1f201-33f8-4652-8176-bbf25d763a0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8pvx7_kube-system(d2f1f201-33f8-4652-8176-bbf25d763a0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e722d4e1bd2531b21ec0056c7ca5f8d7fbed8a56153773a872af2ed36a1e0d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8pvx7" podUID="d2f1f201-33f8-4652-8176-bbf25d763a0a" Mar 6 03:02:45.447285 containerd[1544]: time="2026-03-06T03:02:45.447157642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7765f8b4cc-f727z,Uid:d14ba1a7-66d0-474e-934c-ba181579d0e6,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:45.456550 systemd[1]: Started cri-containerd-7db741994d669819270f8a7c8054c52a56395b4ea4d66023fa11a8c4f38c451b.scope - libcontainer container 7db741994d669819270f8a7c8054c52a56395b4ea4d66023fa11a8c4f38c451b. Mar 6 03:02:45.475381 containerd[1544]: time="2026-03-06T03:02:45.475326568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7666966469-swtsx,Uid:3cc1a662-81a2-4a4f-9b4a-2a39ee715a74,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:45.535432 systemd[1]: run-netns-cni\x2d2aaf75b0\x2d526d\x2d0fce\x2db3ff\x2dd0164df7e0d9.mount: Deactivated successfully. Mar 6 03:02:45.536627 systemd[1]: run-netns-cni\x2dd8814b7b\x2d0f94\x2d1f23\x2d337a\x2dbf005a1efbeb.mount: Deactivated successfully. 
Mar 6 03:02:45.627734 containerd[1544]: time="2026-03-06T03:02:45.627683417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7jb7j,Uid:4ad502fb-71e7-439a-8f21-0ff5a8f05db0,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:45.753549 containerd[1544]: time="2026-03-06T03:02:45.752803064Z" level=info msg="StartContainer for \"7db741994d669819270f8a7c8054c52a56395b4ea4d66023fa11a8c4f38c451b\" returns successfully" Mar 6 03:02:45.772669 containerd[1544]: time="2026-03-06T03:02:45.772586072Z" level=error msg="Failed to destroy network for sandbox \"2911ae073293cb4cf003b91094bee3f1afaa1dd03c6ce374437998db9a9a3002\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.777780 systemd[1]: run-netns-cni\x2d19696447\x2ddef1\x2d1038\x2d47cc\x2dd0d8051bc9a4.mount: Deactivated successfully. Mar 6 03:02:45.780641 containerd[1544]: time="2026-03-06T03:02:45.780573757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7666966469-swtsx,Uid:3cc1a662-81a2-4a4f-9b4a-2a39ee715a74,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2911ae073293cb4cf003b91094bee3f1afaa1dd03c6ce374437998db9a9a3002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.782427 kubelet[2775]: E0306 03:02:45.780976 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2911ae073293cb4cf003b91094bee3f1afaa1dd03c6ce374437998db9a9a3002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 
03:02:45.782427 kubelet[2775]: E0306 03:02:45.781049 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2911ae073293cb4cf003b91094bee3f1afaa1dd03c6ce374437998db9a9a3002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7666966469-swtsx" Mar 6 03:02:45.782427 kubelet[2775]: E0306 03:02:45.781840 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2911ae073293cb4cf003b91094bee3f1afaa1dd03c6ce374437998db9a9a3002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7666966469-swtsx" Mar 6 03:02:45.782778 kubelet[2775]: E0306 03:02:45.781950 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7666966469-swtsx_calico-system(3cc1a662-81a2-4a4f-9b4a-2a39ee715a74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7666966469-swtsx_calico-system(3cc1a662-81a2-4a4f-9b4a-2a39ee715a74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2911ae073293cb4cf003b91094bee3f1afaa1dd03c6ce374437998db9a9a3002\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7666966469-swtsx" podUID="3cc1a662-81a2-4a4f-9b4a-2a39ee715a74" Mar 6 03:02:45.797316 containerd[1544]: time="2026-03-06T03:02:45.797263397Z" level=error msg="Failed to destroy network for sandbox \"259565c47b9a6fb62d8e1d5c650c323ce4a17d94eb118fcf1f996ec495a3d7a0\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.804275 containerd[1544]: time="2026-03-06T03:02:45.804206361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mk9x6,Uid:deaaae01-9814-41f5-a6d2-275edfa5df6e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"259565c47b9a6fb62d8e1d5c650c323ce4a17d94eb118fcf1f996ec495a3d7a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.804488 systemd[1]: run-netns-cni\x2d5d30fdc5\x2dc1fb\x2d962c\x2d7e94\x2d6af2432b568b.mount: Deactivated successfully. Mar 6 03:02:45.805729 kubelet[2775]: E0306 03:02:45.805142 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"259565c47b9a6fb62d8e1d5c650c323ce4a17d94eb118fcf1f996ec495a3d7a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.805729 kubelet[2775]: E0306 03:02:45.805243 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"259565c47b9a6fb62d8e1d5c650c323ce4a17d94eb118fcf1f996ec495a3d7a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-mk9x6" Mar 6 03:02:45.805729 kubelet[2775]: E0306 03:02:45.805292 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"259565c47b9a6fb62d8e1d5c650c323ce4a17d94eb118fcf1f996ec495a3d7a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-mk9x6" Mar 6 03:02:45.805964 kubelet[2775]: E0306 03:02:45.805377 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-mk9x6_calico-system(deaaae01-9814-41f5-a6d2-275edfa5df6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-mk9x6_calico-system(deaaae01-9814-41f5-a6d2-275edfa5df6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"259565c47b9a6fb62d8e1d5c650c323ce4a17d94eb118fcf1f996ec495a3d7a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-mk9x6" podUID="deaaae01-9814-41f5-a6d2-275edfa5df6e" Mar 6 03:02:45.876810 containerd[1544]: time="2026-03-06T03:02:45.876739582Z" level=error msg="Failed to destroy network for sandbox \"8c560c91249ce1b91d7c56803ee8c3b15eee6d30b2fd3502d04620e385416862\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.881186 containerd[1544]: time="2026-03-06T03:02:45.879172810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7765f8b4cc-c5j8p,Uid:de8c1808-0dd2-4e67-850d-f9912f93270e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c560c91249ce1b91d7c56803ee8c3b15eee6d30b2fd3502d04620e385416862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.882299 kubelet[2775]: E0306 03:02:45.882012 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c560c91249ce1b91d7c56803ee8c3b15eee6d30b2fd3502d04620e385416862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.885750 kubelet[2775]: E0306 03:02:45.883661 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c560c91249ce1b91d7c56803ee8c3b15eee6d30b2fd3502d04620e385416862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7765f8b4cc-c5j8p" Mar 6 03:02:45.885750 kubelet[2775]: E0306 03:02:45.883736 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c560c91249ce1b91d7c56803ee8c3b15eee6d30b2fd3502d04620e385416862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7765f8b4cc-c5j8p" Mar 6 03:02:45.885750 kubelet[2775]: E0306 03:02:45.883880 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7765f8b4cc-c5j8p_calico-system(de8c1808-0dd2-4e67-850d-f9912f93270e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7765f8b4cc-c5j8p_calico-system(de8c1808-0dd2-4e67-850d-f9912f93270e)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"8c560c91249ce1b91d7c56803ee8c3b15eee6d30b2fd3502d04620e385416862\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7765f8b4cc-c5j8p" podUID="de8c1808-0dd2-4e67-850d-f9912f93270e" Mar 6 03:02:45.886042 containerd[1544]: time="2026-03-06T03:02:45.885572546Z" level=error msg="Failed to destroy network for sandbox \"ff3ed3fb55f8695d27af0358a7d37d95e6738603b70ed9dbdf08f3f3ccf7cd38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.884331 systemd[1]: run-netns-cni\x2d2be71886\x2d2387\x2d7379\x2df056\x2d870a5eebc9d7.mount: Deactivated successfully. Mar 6 03:02:45.887730 containerd[1544]: time="2026-03-06T03:02:45.887682208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7765f8b4cc-f727z,Uid:d14ba1a7-66d0-474e-934c-ba181579d0e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff3ed3fb55f8695d27af0358a7d37d95e6738603b70ed9dbdf08f3f3ccf7cd38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.889012 kubelet[2775]: E0306 03:02:45.888969 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff3ed3fb55f8695d27af0358a7d37d95e6738603b70ed9dbdf08f3f3ccf7cd38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.889151 kubelet[2775]: E0306 03:02:45.889032 2775 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff3ed3fb55f8695d27af0358a7d37d95e6738603b70ed9dbdf08f3f3ccf7cd38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7765f8b4cc-f727z" Mar 6 03:02:45.889151 kubelet[2775]: E0306 03:02:45.889064 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff3ed3fb55f8695d27af0358a7d37d95e6738603b70ed9dbdf08f3f3ccf7cd38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7765f8b4cc-f727z" Mar 6 03:02:45.889294 kubelet[2775]: E0306 03:02:45.889149 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7765f8b4cc-f727z_calico-system(d14ba1a7-66d0-474e-934c-ba181579d0e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7765f8b4cc-f727z_calico-system(d14ba1a7-66d0-474e-934c-ba181579d0e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff3ed3fb55f8695d27af0358a7d37d95e6738603b70ed9dbdf08f3f3ccf7cd38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7765f8b4cc-f727z" podUID="d14ba1a7-66d0-474e-934c-ba181579d0e6" Mar 6 03:02:45.915883 containerd[1544]: time="2026-03-06T03:02:45.915816164Z" level=error msg="Failed to destroy network for sandbox \"d26f3280b87896ecb076a08a35eb39118fdbbe7d2dd838079c6dc424ef557004\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.917833 containerd[1544]: time="2026-03-06T03:02:45.917681670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7jb7j,Uid:4ad502fb-71e7-439a-8f21-0ff5a8f05db0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d26f3280b87896ecb076a08a35eb39118fdbbe7d2dd838079c6dc424ef557004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.918178 kubelet[2775]: E0306 03:02:45.918132 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d26f3280b87896ecb076a08a35eb39118fdbbe7d2dd838079c6dc424ef557004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 03:02:45.918315 kubelet[2775]: E0306 03:02:45.918204 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d26f3280b87896ecb076a08a35eb39118fdbbe7d2dd838079c6dc424ef557004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7jb7j" Mar 6 03:02:45.918315 kubelet[2775]: E0306 03:02:45.918235 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d26f3280b87896ecb076a08a35eb39118fdbbe7d2dd838079c6dc424ef557004\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7jb7j" Mar 6 03:02:45.918437 kubelet[2775]: E0306 03:02:45.918311 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7jb7j_kube-system(4ad502fb-71e7-439a-8f21-0ff5a8f05db0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7jb7j_kube-system(4ad502fb-71e7-439a-8f21-0ff5a8f05db0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d26f3280b87896ecb076a08a35eb39118fdbbe7d2dd838079c6dc424ef557004\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7jb7j" podUID="4ad502fb-71e7-439a-8f21-0ff5a8f05db0" Mar 6 03:02:45.979722 systemd[1]: Created slice kubepods-besteffort-pod92f0c9a8_d6bf_42d2_b2a2_ef49488f9604.slice - libcontainer container kubepods-besteffort-pod92f0c9a8_d6bf_42d2_b2a2_ef49488f9604.slice. 
Mar 6 03:02:45.988120 containerd[1544]: time="2026-03-06T03:02:45.987719341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fkzvq,Uid:92f0c9a8-d6bf-42d2-b2a2-ef49488f9604,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:46.214989 systemd-networkd[1410]: cali357e2024036: Link UP Mar 6 03:02:46.217499 systemd-networkd[1410]: cali357e2024036: Gained carrier Mar 6 03:02:46.275618 containerd[1544]: 2026-03-06 03:02:46.044 [ERROR][3817] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 03:02:46.275618 containerd[1544]: 2026-03-06 03:02:46.067 [INFO][3817] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0 csi-node-driver- calico-system 92f0c9a8-d6bf-42d2-b2a2-ef49488f9604 706 0 2026-03-06 03:02:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011 csi-node-driver-fkzvq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali357e2024036 [] [] }} ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Namespace="calico-system" Pod="csi-node-driver-fkzvq" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-" Mar 6 03:02:46.275618 containerd[1544]: 2026-03-06 03:02:46.067 [INFO][3817] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Namespace="calico-system" 
Pod="csi-node-driver-fkzvq" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" Mar 6 03:02:46.275618 containerd[1544]: 2026-03-06 03:02:46.116 [INFO][3835] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" HandleID="k8s-pod-network.5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" Mar 6 03:02:46.275982 containerd[1544]: 2026-03-06 03:02:46.127 [INFO][3835] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" HandleID="k8s-pod-network.5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fddc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", "pod":"csi-node-driver-fkzvq", "timestamp":"2026-03-06 03:02:46.116392623 +0000 UTC"}, Hostname:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Mar 6 03:02:46.275982 containerd[1544]: 2026-03-06 03:02:46.127 [INFO][3835] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:02:46.275982 containerd[1544]: 2026-03-06 03:02:46.127 [INFO][3835] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:02:46.275982 containerd[1544]: 2026-03-06 03:02:46.128 [INFO][3835] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:02:46.275982 containerd[1544]: 2026-03-06 03:02:46.130 [INFO][3835] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.275982 containerd[1544]: 2026-03-06 03:02:46.136 [INFO][3835] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.275982 containerd[1544]: 2026-03-06 03:02:46.151 [INFO][3835] ipam/ipam.go 558: Ran out of existing affine blocks for host host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.154 [INFO][3835] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.157 [INFO][3835] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.21.128/26 Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.158 [INFO][3835] ipam/ipam.go 588: Found unclaimed block in 3.51272ms host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" subnet=192.168.21.128/26 Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.158 [INFO][3835] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" subnet=192.168.21.128/26 Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.162 [INFO][3835] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" subnet=192.168.21.128/26 Mar 6 
03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.162 [INFO][3835] ipam/ipam.go 160: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.164 [INFO][3835] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.167 [INFO][3835] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.169 [INFO][3835] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.169 [INFO][3835] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" subnet=192.168.21.128/26 Mar 6 03:02:46.277422 containerd[1544]: 2026-03-06 03:02:46.174 [INFO][3835] ipam/ipam_block_reader_writer.go 267: Successfully created block Mar 6 03:02:46.279906 containerd[1544]: 2026-03-06 03:02:46.174 [INFO][3835] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" subnet=192.168.21.128/26 Mar 6 03:02:46.279906 containerd[1544]: 2026-03-06 03:02:46.180 [INFO][3835] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" subnet=192.168.21.128/26 Mar 6 03:02:46.279906 containerd[1544]: 2026-03-06 03:02:46.180 [INFO][3835] ipam/ipam.go 623: Block '192.168.21.128/26' has 64 free ips which is more than 1 ips required. 
host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" subnet=192.168.21.128/26 Mar 6 03:02:46.279906 containerd[1544]: 2026-03-06 03:02:46.180 [INFO][3835] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.279906 containerd[1544]: 2026-03-06 03:02:46.181 [INFO][3835] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21 Mar 6 03:02:46.279906 containerd[1544]: 2026-03-06 03:02:46.185 [INFO][3835] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.279906 containerd[1544]: 2026-03-06 03:02:46.192 [INFO][3835] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.21.128/26] block=192.168.21.128/26 handle="k8s-pod-network.5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.279906 containerd[1544]: 2026-03-06 03:02:46.193 [INFO][3835] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.21.128/26] handle="k8s-pod-network.5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:46.279906 containerd[1544]: 2026-03-06 03:02:46.193 [INFO][3835] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 03:02:46.282684 containerd[1544]: 2026-03-06 03:02:46.193 [INFO][3835] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.21.128/26] IPv6=[] ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" HandleID="k8s-pod-network.5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" Mar 6 03:02:46.282752 kubelet[2775]: I0306 03:02:46.281613 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-56tbf" podStartSLOduration=4.105157704 podStartE2EDuration="20.281588029s" podCreationTimestamp="2026-03-06 03:02:26 +0000 UTC" firstStartedPulling="2026-03-06 03:02:27.056326118 +0000 UTC m=+22.338515406" lastFinishedPulling="2026-03-06 03:02:43.232756453 +0000 UTC m=+38.514945731" observedRunningTime="2026-03-06 03:02:46.278617284 +0000 UTC m=+41.560806613" watchObservedRunningTime="2026-03-06 03:02:46.281588029 +0000 UTC m=+41.563777334" Mar 6 03:02:46.283260 containerd[1544]: 2026-03-06 03:02:46.197 [INFO][3817] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Namespace="calico-system" Pod="csi-node-driver-fkzvq" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"92f0c9a8-d6bf-42d2-b2a2-ef49488f9604", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"", Pod:"csi-node-driver-fkzvq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali357e2024036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:02:46.283260 containerd[1544]: 2026-03-06 03:02:46.197 [INFO][3817] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.128/32] ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Namespace="calico-system" Pod="csi-node-driver-fkzvq" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" Mar 6 03:02:46.283260 containerd[1544]: 2026-03-06 03:02:46.197 [INFO][3817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali357e2024036 ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Namespace="calico-system" Pod="csi-node-driver-fkzvq" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" Mar 6 03:02:46.283260 containerd[1544]: 2026-03-06 03:02:46.220 [INFO][3817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Namespace="calico-system" Pod="csi-node-driver-fkzvq" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" Mar 6 03:02:46.283260 containerd[1544]: 2026-03-06 03:02:46.221 [INFO][3817] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Namespace="calico-system" Pod="csi-node-driver-fkzvq" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"92f0c9a8-d6bf-42d2-b2a2-ef49488f9604", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21", Pod:"csi-node-driver-fkzvq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.21.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali357e2024036", MAC:"9e:98:19:32:5e:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:02:46.283700 containerd[1544]: 2026-03-06 03:02:46.249 [INFO][3817] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" Namespace="calico-system" Pod="csi-node-driver-fkzvq" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-csi--node--driver--fkzvq-eth0" Mar 6 03:02:46.326017 containerd[1544]: time="2026-03-06T03:02:46.325940435Z" level=info msg="connecting to shim 5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21" address="unix:///run/containerd/s/249990abe4addc9f451bdc20469919b1764f2abdbe7a3d79a7c228295adb55ca" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:46.377133 systemd[1]: Started cri-containerd-5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21.scope - libcontainer container 5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21. 
Mar 6 03:02:46.391457 kubelet[2775]: I0306 03:02:46.390781 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b78bs\" (UniqueName: \"kubernetes.io/projected/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-kube-api-access-b78bs\") pod \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\" (UID: \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\") " Mar 6 03:02:46.391457 kubelet[2775]: I0306 03:02:46.391372 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-nginx-config\") pod \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\" (UID: \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\") " Mar 6 03:02:46.391457 kubelet[2775]: I0306 03:02:46.391407 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-whisker-ca-bundle\") pod \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\" (UID: \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\") " Mar 6 03:02:46.394763 kubelet[2775]: I0306 03:02:46.392187 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-whisker-backend-key-pair\") pod \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\" (UID: \"3cc1a662-81a2-4a4f-9b4a-2a39ee715a74\") " Mar 6 03:02:46.394763 kubelet[2775]: I0306 03:02:46.392993 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3cc1a662-81a2-4a4f-9b4a-2a39ee715a74" (UID: "3cc1a662-81a2-4a4f-9b4a-2a39ee715a74"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 03:02:46.395635 kubelet[2775]: I0306 03:02:46.395316 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "3cc1a662-81a2-4a4f-9b4a-2a39ee715a74" (UID: "3cc1a662-81a2-4a4f-9b4a-2a39ee715a74"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 03:02:46.404147 kubelet[2775]: I0306 03:02:46.403898 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-kube-api-access-b78bs" (OuterVolumeSpecName: "kube-api-access-b78bs") pod "3cc1a662-81a2-4a4f-9b4a-2a39ee715a74" (UID: "3cc1a662-81a2-4a4f-9b4a-2a39ee715a74"). InnerVolumeSpecName "kube-api-access-b78bs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 03:02:46.409932 kubelet[2775]: I0306 03:02:46.409527 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3cc1a662-81a2-4a4f-9b4a-2a39ee715a74" (UID: "3cc1a662-81a2-4a4f-9b4a-2a39ee715a74"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 03:02:46.470630 containerd[1544]: time="2026-03-06T03:02:46.470484378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fkzvq,Uid:92f0c9a8-d6bf-42d2-b2a2-ef49488f9604,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21\"" Mar 6 03:02:46.475498 containerd[1544]: time="2026-03-06T03:02:46.475453178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 6 03:02:46.493483 kubelet[2775]: I0306 03:02:46.493361 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-whisker-backend-key-pair\") on node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" DevicePath \"\"" Mar 6 03:02:46.493483 kubelet[2775]: I0306 03:02:46.493404 2775 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b78bs\" (UniqueName: \"kubernetes.io/projected/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-kube-api-access-b78bs\") on node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" DevicePath \"\"" Mar 6 03:02:46.493483 kubelet[2775]: I0306 03:02:46.493423 2775 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-nginx-config\") on node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" DevicePath \"\"" Mar 6 03:02:46.493483 kubelet[2775]: I0306 03:02:46.493441 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74-whisker-ca-bundle\") on node \"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011\" DevicePath \"\"" Mar 6 03:02:46.498237 systemd[1]: run-netns-cni\x2d319d17d9\x2db996\x2d0136\x2d0269\x2db4f39e362efd.mount: Deactivated successfully. 
Mar 6 03:02:46.498376 systemd[1]: run-netns-cni\x2d36fc61a5\x2d2c7f\x2dc5c4\x2d55c0\x2ddb308e4b321c.mount: Deactivated successfully. Mar 6 03:02:46.498500 systemd[1]: var-lib-kubelet-pods-3cc1a662\x2d81a2\x2d4a4f\x2d9b4a\x2d2a39ee715a74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db78bs.mount: Deactivated successfully. Mar 6 03:02:46.498611 systemd[1]: var-lib-kubelet-pods-3cc1a662\x2d81a2\x2d4a4f\x2d9b4a\x2d2a39ee715a74-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 03:02:46.975659 systemd[1]: Removed slice kubepods-besteffort-pod3cc1a662_81a2_4a4f_9b4a_2a39ee715a74.slice - libcontainer container kubepods-besteffort-pod3cc1a662_81a2_4a4f_9b4a_2a39ee715a74.slice. Mar 6 03:02:47.315396 systemd-networkd[1410]: cali357e2024036: Gained IPv6LL Mar 6 03:02:47.367760 systemd[1]: Created slice kubepods-besteffort-pod31ddb00f_694c_4d66_8f00_ad9e25df3e4f.slice - libcontainer container kubepods-besteffort-pod31ddb00f_694c_4d66_8f00_ad9e25df3e4f.slice. 
Mar 6 03:02:47.505071 kubelet[2775]: I0306 03:02:47.504355 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31ddb00f-694c-4d66-8f00-ad9e25df3e4f-whisker-backend-key-pair\") pod \"whisker-d4c6845f8-rpvhn\" (UID: \"31ddb00f-694c-4d66-8f00-ad9e25df3e4f\") " pod="calico-system/whisker-d4c6845f8-rpvhn" Mar 6 03:02:47.507219 kubelet[2775]: I0306 03:02:47.505835 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ddb00f-694c-4d66-8f00-ad9e25df3e4f-whisker-ca-bundle\") pod \"whisker-d4c6845f8-rpvhn\" (UID: \"31ddb00f-694c-4d66-8f00-ad9e25df3e4f\") " pod="calico-system/whisker-d4c6845f8-rpvhn" Mar 6 03:02:47.507219 kubelet[2775]: I0306 03:02:47.506208 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/31ddb00f-694c-4d66-8f00-ad9e25df3e4f-nginx-config\") pod \"whisker-d4c6845f8-rpvhn\" (UID: \"31ddb00f-694c-4d66-8f00-ad9e25df3e4f\") " pod="calico-system/whisker-d4c6845f8-rpvhn" Mar 6 03:02:47.507219 kubelet[2775]: I0306 03:02:47.507178 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw5fb\" (UniqueName: \"kubernetes.io/projected/31ddb00f-694c-4d66-8f00-ad9e25df3e4f-kube-api-access-vw5fb\") pod \"whisker-d4c6845f8-rpvhn\" (UID: \"31ddb00f-694c-4d66-8f00-ad9e25df3e4f\") " pod="calico-system/whisker-d4c6845f8-rpvhn" Mar 6 03:02:47.680097 containerd[1544]: time="2026-03-06T03:02:47.679912886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4c6845f8-rpvhn,Uid:31ddb00f-694c-4d66-8f00-ad9e25df3e4f,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:47.774554 containerd[1544]: time="2026-03-06T03:02:47.774493904Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:47.777121 containerd[1544]: time="2026-03-06T03:02:47.776651700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 6 03:02:47.780050 containerd[1544]: time="2026-03-06T03:02:47.779996169Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:47.787105 containerd[1544]: time="2026-03-06T03:02:47.787036179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:47.788202 containerd[1544]: time="2026-03-06T03:02:47.788121889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.312614946s" Mar 6 03:02:47.788202 containerd[1544]: time="2026-03-06T03:02:47.788164813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 6 03:02:47.797098 containerd[1544]: time="2026-03-06T03:02:47.797035333Z" level=info msg="CreateContainer within sandbox \"5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 6 03:02:47.827494 containerd[1544]: time="2026-03-06T03:02:47.822864447Z" level=info msg="Container 201c69d6c3a0f2ee9a1b0991504f737c944d7d0ed573d75946ef056d4c3e7b68: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:47.839259 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576871324.mount: Deactivated successfully. Mar 6 03:02:47.853980 containerd[1544]: time="2026-03-06T03:02:47.853158641Z" level=info msg="CreateContainer within sandbox \"5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"201c69d6c3a0f2ee9a1b0991504f737c944d7d0ed573d75946ef056d4c3e7b68\"" Mar 6 03:02:47.855633 containerd[1544]: time="2026-03-06T03:02:47.855404621Z" level=info msg="StartContainer for \"201c69d6c3a0f2ee9a1b0991504f737c944d7d0ed573d75946ef056d4c3e7b68\"" Mar 6 03:02:47.866863 containerd[1544]: time="2026-03-06T03:02:47.866245493Z" level=info msg="connecting to shim 201c69d6c3a0f2ee9a1b0991504f737c944d7d0ed573d75946ef056d4c3e7b68" address="unix:///run/containerd/s/249990abe4addc9f451bdc20469919b1764f2abdbe7a3d79a7c228295adb55ca" protocol=ttrpc version=3 Mar 6 03:02:47.923757 systemd[1]: Started cri-containerd-201c69d6c3a0f2ee9a1b0991504f737c944d7d0ed573d75946ef056d4c3e7b68.scope - libcontainer container 201c69d6c3a0f2ee9a1b0991504f737c944d7d0ed573d75946ef056d4c3e7b68. 
Mar 6 03:02:47.985719 systemd-networkd[1410]: cali5b2f9e8b4de: Link UP Mar 6 03:02:47.990482 systemd-networkd[1410]: cali5b2f9e8b4de: Gained carrier Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.766 [ERROR][4024] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.796 [INFO][4024] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0 whisker-d4c6845f8- calico-system 31ddb00f-694c-4d66-8f00-ad9e25df3e4f 903 0 2026-03-06 03:02:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d4c6845f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011 whisker-d4c6845f8-rpvhn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5b2f9e8b4de [] [] }} ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Namespace="calico-system" Pod="whisker-d4c6845f8-rpvhn" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.798 [INFO][4024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Namespace="calico-system" Pod="whisker-d4c6845f8-rpvhn" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.910 [INFO][4050] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" HandleID="k8s-pod-network.13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.935 [INFO][4050] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" HandleID="k8s-pod-network.13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", "pod":"whisker-d4c6845f8-rpvhn", "timestamp":"2026-03-06 03:02:47.910468568 +0000 UTC"}, Hostname:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017cdc0)} Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.936 [INFO][4050] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.936 [INFO][4050] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.936 [INFO][4050] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.940 [INFO][4050] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.946 [INFO][4050] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.954 [INFO][4050] ipam/ipam.go 526: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.957 [INFO][4050] ipam/ipam.go 160: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.959 [INFO][4050] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.961 [INFO][4050] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.963 [INFO][4050] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4 Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.968 [INFO][4050] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.21.128/26 
handle="k8s-pod-network.13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:48.031046 containerd[1544]: 2026-03-06 03:02:47.977 [INFO][4050] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.21.130/26] block=192.168.21.128/26 handle="k8s-pod-network.13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:48.032040 containerd[1544]: 2026-03-06 03:02:47.978 [INFO][4050] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.21.130/26] handle="k8s-pod-network.13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:48.032040 containerd[1544]: 2026-03-06 03:02:47.978 [INFO][4050] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:02:48.032040 containerd[1544]: 2026-03-06 03:02:47.978 [INFO][4050] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.21.130/26] IPv6=[] ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" HandleID="k8s-pod-network.13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" Mar 6 03:02:48.032040 containerd[1544]: 2026-03-06 03:02:47.982 [INFO][4024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Namespace="calico-system" Pod="whisker-d4c6845f8-rpvhn" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0", GenerateName:"whisker-d4c6845f8-", 
Namespace:"calico-system", SelfLink:"", UID:"31ddb00f-694c-4d66-8f00-ad9e25df3e4f", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d4c6845f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"", Pod:"whisker-d4c6845f8-rpvhn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5b2f9e8b4de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:02:48.032040 containerd[1544]: 2026-03-06 03:02:47.982 [INFO][4024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.130/32] ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Namespace="calico-system" Pod="whisker-d4c6845f8-rpvhn" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" Mar 6 03:02:48.032040 containerd[1544]: 2026-03-06 03:02:47.982 [INFO][4024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b2f9e8b4de ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Namespace="calico-system" Pod="whisker-d4c6845f8-rpvhn" 
WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" Mar 6 03:02:48.032040 containerd[1544]: 2026-03-06 03:02:47.986 [INFO][4024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Namespace="calico-system" Pod="whisker-d4c6845f8-rpvhn" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" Mar 6 03:02:48.032915 containerd[1544]: 2026-03-06 03:02:47.986 [INFO][4024] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Namespace="calico-system" Pod="whisker-d4c6845f8-rpvhn" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0", GenerateName:"whisker-d4c6845f8-", Namespace:"calico-system", SelfLink:"", UID:"31ddb00f-694c-4d66-8f00-ad9e25df3e4f", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d4c6845f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", 
ContainerID:"13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4", Pod:"whisker-d4c6845f8-rpvhn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5b2f9e8b4de", MAC:"26:06:59:fc:6b:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:02:48.032915 containerd[1544]: 2026-03-06 03:02:48.011 [INFO][4024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" Namespace="calico-system" Pod="whisker-d4c6845f8-rpvhn" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-whisker--d4c6845f8--rpvhn-eth0" Mar 6 03:02:48.104007 containerd[1544]: time="2026-03-06T03:02:48.103943198Z" level=info msg="connecting to shim 13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4" address="unix:///run/containerd/s/844c916a51180a49eeda4dbcbb0ab4813acf7bc75e3878982f4680f852f19ba8" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:48.183769 systemd[1]: Started cri-containerd-13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4.scope - libcontainer container 13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4. 
Mar 6 03:02:48.213105 containerd[1544]: time="2026-03-06T03:02:48.213036164Z" level=info msg="StartContainer for \"201c69d6c3a0f2ee9a1b0991504f737c944d7d0ed573d75946ef056d4c3e7b68\" returns successfully" Mar 6 03:02:48.217442 containerd[1544]: time="2026-03-06T03:02:48.217391474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 6 03:02:48.366657 containerd[1544]: time="2026-03-06T03:02:48.366590336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4c6845f8-rpvhn,Uid:31ddb00f-694c-4d66-8f00-ad9e25df3e4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4\"" Mar 6 03:02:48.976883 kubelet[2775]: I0306 03:02:48.976786 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc1a662-81a2-4a4f-9b4a-2a39ee715a74" path="/var/lib/kubelet/pods/3cc1a662-81a2-4a4f-9b4a-2a39ee715a74/volumes" Mar 6 03:02:49.315281 systemd-networkd[1410]: vxlan.calico: Link UP Mar 6 03:02:49.315293 systemd-networkd[1410]: vxlan.calico: Gained carrier Mar 6 03:02:49.811274 systemd-networkd[1410]: cali5b2f9e8b4de: Gained IPv6LL Mar 6 03:02:49.884404 containerd[1544]: time="2026-03-06T03:02:49.884350929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:49.887379 containerd[1544]: time="2026-03-06T03:02:49.887332677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 6 03:02:49.888796 containerd[1544]: time="2026-03-06T03:02:49.888732807Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:49.895099 containerd[1544]: time="2026-03-06T03:02:49.894626117Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:49.897584 containerd[1544]: time="2026-03-06T03:02:49.897519882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.680074081s" Mar 6 03:02:49.897864 containerd[1544]: time="2026-03-06T03:02:49.897835571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 6 03:02:49.900794 containerd[1544]: time="2026-03-06T03:02:49.900757252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 03:02:49.904613 containerd[1544]: time="2026-03-06T03:02:49.904582864Z" level=info msg="CreateContainer within sandbox \"5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 6 03:02:49.922306 containerd[1544]: time="2026-03-06T03:02:49.922253069Z" level=info msg="Container a959e9b7b07aeb1098646f6ea3f7c093238d51a101bd31bb8a48e52f0e5e0912: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:49.946729 containerd[1544]: time="2026-03-06T03:02:49.945869375Z" level=info msg="CreateContainer within sandbox \"5d6e4ca2186af223590a11a717f6381c87c129dd60f6d367070bcb1584ee6a21\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a959e9b7b07aeb1098646f6ea3f7c093238d51a101bd31bb8a48e52f0e5e0912\"" Mar 6 03:02:49.949294 containerd[1544]: 
time="2026-03-06T03:02:49.949242456Z" level=info msg="StartContainer for \"a959e9b7b07aeb1098646f6ea3f7c093238d51a101bd31bb8a48e52f0e5e0912\"" Mar 6 03:02:49.953234 containerd[1544]: time="2026-03-06T03:02:49.953199579Z" level=info msg="connecting to shim a959e9b7b07aeb1098646f6ea3f7c093238d51a101bd31bb8a48e52f0e5e0912" address="unix:///run/containerd/s/249990abe4addc9f451bdc20469919b1764f2abdbe7a3d79a7c228295adb55ca" protocol=ttrpc version=3 Mar 6 03:02:50.018148 systemd[1]: Started cri-containerd-a959e9b7b07aeb1098646f6ea3f7c093238d51a101bd31bb8a48e52f0e5e0912.scope - libcontainer container a959e9b7b07aeb1098646f6ea3f7c093238d51a101bd31bb8a48e52f0e5e0912. Mar 6 03:02:50.126201 containerd[1544]: time="2026-03-06T03:02:50.126135711Z" level=info msg="StartContainer for \"a959e9b7b07aeb1098646f6ea3f7c093238d51a101bd31bb8a48e52f0e5e0912\" returns successfully" Mar 6 03:02:50.911224 containerd[1544]: time="2026-03-06T03:02:50.911152183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:50.912653 containerd[1544]: time="2026-03-06T03:02:50.912414581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 6 03:02:50.913921 containerd[1544]: time="2026-03-06T03:02:50.913877785Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:50.916895 containerd[1544]: time="2026-03-06T03:02:50.916860168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:50.918103 containerd[1544]: time="2026-03-06T03:02:50.917743624Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.016682862s" Mar 6 03:02:50.918103 containerd[1544]: time="2026-03-06T03:02:50.917788427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 6 03:02:50.923300 containerd[1544]: time="2026-03-06T03:02:50.923264253Z" level=info msg="CreateContainer within sandbox \"13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 03:02:50.934935 containerd[1544]: time="2026-03-06T03:02:50.933580636Z" level=info msg="Container a91b2780efa05abc00ce3da4257eee8480b91764459105a63d6a7d7e1d687003: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:50.946576 containerd[1544]: time="2026-03-06T03:02:50.946523474Z" level=info msg="CreateContainer within sandbox \"13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a91b2780efa05abc00ce3da4257eee8480b91764459105a63d6a7d7e1d687003\"" Mar 6 03:02:50.947207 containerd[1544]: time="2026-03-06T03:02:50.947168776Z" level=info msg="StartContainer for \"a91b2780efa05abc00ce3da4257eee8480b91764459105a63d6a7d7e1d687003\"" Mar 6 03:02:50.949208 containerd[1544]: time="2026-03-06T03:02:50.949167335Z" level=info msg="connecting to shim a91b2780efa05abc00ce3da4257eee8480b91764459105a63d6a7d7e1d687003" address="unix:///run/containerd/s/844c916a51180a49eeda4dbcbb0ab4813acf7bc75e3878982f4680f852f19ba8" protocol=ttrpc version=3 Mar 6 03:02:50.987306 systemd[1]: Started cri-containerd-a91b2780efa05abc00ce3da4257eee8480b91764459105a63d6a7d7e1d687003.scope - libcontainer container 
a91b2780efa05abc00ce3da4257eee8480b91764459105a63d6a7d7e1d687003. Mar 6 03:02:51.056926 containerd[1544]: time="2026-03-06T03:02:51.056873086Z" level=info msg="StartContainer for \"a91b2780efa05abc00ce3da4257eee8480b91764459105a63d6a7d7e1d687003\" returns successfully" Mar 6 03:02:51.062221 containerd[1544]: time="2026-03-06T03:02:51.062176176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 03:02:51.087913 kubelet[2775]: I0306 03:02:51.087618 2775 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 6 03:02:51.087913 kubelet[2775]: I0306 03:02:51.087679 2775 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 6 03:02:51.218397 systemd-networkd[1410]: vxlan.calico: Gained IPv6LL Mar 6 03:02:52.352137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309930829.mount: Deactivated successfully. 
Mar 6 03:02:52.374973 containerd[1544]: time="2026-03-06T03:02:52.374911372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:52.376478 containerd[1544]: time="2026-03-06T03:02:52.376423070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 6 03:02:52.377638 containerd[1544]: time="2026-03-06T03:02:52.377466441Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:52.380452 containerd[1544]: time="2026-03-06T03:02:52.380417721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:52.381650 containerd[1544]: time="2026-03-06T03:02:52.381453367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.319226121s" Mar 6 03:02:52.381650 containerd[1544]: time="2026-03-06T03:02:52.381495917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 6 03:02:52.386542 containerd[1544]: time="2026-03-06T03:02:52.386423393Z" level=info msg="CreateContainer within sandbox \"13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 03:02:52.396497 
containerd[1544]: time="2026-03-06T03:02:52.396220724Z" level=info msg="Container 9ed9dc09bf3cc85a737b512c5c2d85dedff55876a393859b79539ec8f517bdde: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:52.410774 containerd[1544]: time="2026-03-06T03:02:52.410715945Z" level=info msg="CreateContainer within sandbox \"13b0fe8674e507bc6fcac791cbdee335f0feba2d6e4590015a42a727e450cbe4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9ed9dc09bf3cc85a737b512c5c2d85dedff55876a393859b79539ec8f517bdde\"" Mar 6 03:02:52.411536 containerd[1544]: time="2026-03-06T03:02:52.411403901Z" level=info msg="StartContainer for \"9ed9dc09bf3cc85a737b512c5c2d85dedff55876a393859b79539ec8f517bdde\"" Mar 6 03:02:52.413387 containerd[1544]: time="2026-03-06T03:02:52.413353145Z" level=info msg="connecting to shim 9ed9dc09bf3cc85a737b512c5c2d85dedff55876a393859b79539ec8f517bdde" address="unix:///run/containerd/s/844c916a51180a49eeda4dbcbb0ab4813acf7bc75e3878982f4680f852f19ba8" protocol=ttrpc version=3 Mar 6 03:02:52.444484 systemd[1]: Started cri-containerd-9ed9dc09bf3cc85a737b512c5c2d85dedff55876a393859b79539ec8f517bdde.scope - libcontainer container 9ed9dc09bf3cc85a737b512c5c2d85dedff55876a393859b79539ec8f517bdde. 
Mar 6 03:02:52.524790 containerd[1544]: time="2026-03-06T03:02:52.524703006Z" level=info msg="StartContainer for \"9ed9dc09bf3cc85a737b512c5c2d85dedff55876a393859b79539ec8f517bdde\" returns successfully" Mar 6 03:02:53.284807 kubelet[2775]: I0306 03:02:53.284264 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fkzvq" podStartSLOduration=23.859002869 podStartE2EDuration="27.284238948s" podCreationTimestamp="2026-03-06 03:02:26 +0000 UTC" firstStartedPulling="2026-03-06 03:02:46.474161546 +0000 UTC m=+41.756350829" lastFinishedPulling="2026-03-06 03:02:49.899397608 +0000 UTC m=+45.181586908" observedRunningTime="2026-03-06 03:02:50.276151954 +0000 UTC m=+45.558341260" watchObservedRunningTime="2026-03-06 03:02:53.284238948 +0000 UTC m=+48.566428262" Mar 6 03:02:53.287455 kubelet[2775]: I0306 03:02:53.285830 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-d4c6845f8-rpvhn" podStartSLOduration=2.273396327 podStartE2EDuration="6.285813575s" podCreationTimestamp="2026-03-06 03:02:47 +0000 UTC" firstStartedPulling="2026-03-06 03:02:48.370356873 +0000 UTC m=+43.652546171" lastFinishedPulling="2026-03-06 03:02:52.382774139 +0000 UTC m=+47.664963419" observedRunningTime="2026-03-06 03:02:53.283988846 +0000 UTC m=+48.566178178" watchObservedRunningTime="2026-03-06 03:02:53.285813575 +0000 UTC m=+48.568002878" Mar 6 03:02:53.798315 ntpd[1650]: Listen normally on 6 vxlan.calico 192.168.21.129:123 Mar 6 03:02:53.798401 ntpd[1650]: Listen normally on 7 cali357e2024036 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 6 03:02:53.798864 ntpd[1650]: 6 Mar 03:02:53 ntpd[1650]: Listen normally on 6 vxlan.calico 192.168.21.129:123 Mar 6 03:02:53.798864 ntpd[1650]: 6 Mar 03:02:53 ntpd[1650]: Listen normally on 7 cali357e2024036 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 6 03:02:53.798864 ntpd[1650]: 6 Mar 03:02:53 ntpd[1650]: Listen normally on 8 cali5b2f9e8b4de [fe80::ecee:eeff:feee:eeee%5]:123 
Mar 6 03:02:53.798864 ntpd[1650]: 6 Mar 03:02:53 ntpd[1650]: Listen normally on 9 vxlan.calico [fe80::64df:32ff:fee2:7bdc%6]:123 Mar 6 03:02:53.798445 ntpd[1650]: Listen normally on 8 cali5b2f9e8b4de [fe80::ecee:eeff:feee:eeee%5]:123 Mar 6 03:02:53.798486 ntpd[1650]: Listen normally on 9 vxlan.calico [fe80::64df:32ff:fee2:7bdc%6]:123 Mar 6 03:02:55.970269 containerd[1544]: time="2026-03-06T03:02:55.970206659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8cdfbccd-xdsjp,Uid:6617c0cc-bec3-47a6-9592-47297d1680a9,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:56.169009 systemd-networkd[1410]: cali78e7218333c: Link UP Mar 6 03:02:56.170471 systemd-networkd[1410]: cali78e7218333c: Gained carrier Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.076 [INFO][4415] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0 calico-kube-controllers-7b8cdfbccd- calico-system 6617c0cc-bec3-47a6-9592-47297d1680a9 842 0 2026-03-06 03:02:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b8cdfbccd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011 calico-kube-controllers-7b8cdfbccd-xdsjp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali78e7218333c [] [] }} ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Namespace="calico-system" Pod="calico-kube-controllers-7b8cdfbccd-xdsjp" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.077 
[INFO][4415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Namespace="calico-system" Pod="calico-kube-controllers-7b8cdfbccd-xdsjp" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.115 [INFO][4428] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" HandleID="k8s-pod-network.df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.126 [INFO][4428] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" HandleID="k8s-pod-network.df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036f910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", "pod":"calico-kube-controllers-7b8cdfbccd-xdsjp", "timestamp":"2026-03-06 03:02:56.115989124 +0000 UTC"}, Hostname:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000536f20)} Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.126 [INFO][4428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.126 [INFO][4428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.126 [INFO][4428] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.129 [INFO][4428] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.136 [INFO][4428] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.142 [INFO][4428] ipam/ipam.go 526: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.144 [INFO][4428] ipam/ipam.go 160: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.147 [INFO][4428] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.147 [INFO][4428] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.148 [INFO][4428] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320 Mar 6 03:02:56.194777 containerd[1544]: 
2026-03-06 03:02:56.153 [INFO][4428] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:56.194777 containerd[1544]: 2026-03-06 03:02:56.161 [INFO][4428] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.21.131/26] block=192.168.21.128/26 handle="k8s-pod-network.df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:56.196524 containerd[1544]: 2026-03-06 03:02:56.161 [INFO][4428] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.21.131/26] handle="k8s-pod-network.df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:56.196524 containerd[1544]: 2026-03-06 03:02:56.161 [INFO][4428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 03:02:56.196524 containerd[1544]: 2026-03-06 03:02:56.161 [INFO][4428] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.21.131/26] IPv6=[] ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" HandleID="k8s-pod-network.df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" Mar 6 03:02:56.196524 containerd[1544]: 2026-03-06 03:02:56.164 [INFO][4415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Namespace="calico-system" Pod="calico-kube-controllers-7b8cdfbccd-xdsjp" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0", GenerateName:"calico-kube-controllers-7b8cdfbccd-", Namespace:"calico-system", SelfLink:"", UID:"6617c0cc-bec3-47a6-9592-47297d1680a9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b8cdfbccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"", Pod:"calico-kube-controllers-7b8cdfbccd-xdsjp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78e7218333c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:02:56.196524 containerd[1544]: 2026-03-06 03:02:56.164 [INFO][4415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.131/32] ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Namespace="calico-system" Pod="calico-kube-controllers-7b8cdfbccd-xdsjp" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" Mar 6 03:02:56.196524 containerd[1544]: 2026-03-06 03:02:56.164 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78e7218333c ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Namespace="calico-system" Pod="calico-kube-controllers-7b8cdfbccd-xdsjp" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" Mar 6 03:02:56.196524 containerd[1544]: 2026-03-06 03:02:56.171 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Namespace="calico-system" Pod="calico-kube-controllers-7b8cdfbccd-xdsjp" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" Mar 6 03:02:56.197977 containerd[1544]: 2026-03-06 03:02:56.172 [INFO][4415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Namespace="calico-system" Pod="calico-kube-controllers-7b8cdfbccd-xdsjp" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0", GenerateName:"calico-kube-controllers-7b8cdfbccd-", Namespace:"calico-system", SelfLink:"", UID:"6617c0cc-bec3-47a6-9592-47297d1680a9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b8cdfbccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320", Pod:"calico-kube-controllers-7b8cdfbccd-xdsjp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78e7218333c", MAC:"5e:58:d5:60:e4:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 
03:02:56.197977 containerd[1544]: 2026-03-06 03:02:56.186 [INFO][4415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" Namespace="calico-system" Pod="calico-kube-controllers-7b8cdfbccd-xdsjp" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--kube--controllers--7b8cdfbccd--xdsjp-eth0" Mar 6 03:02:56.251067 containerd[1544]: time="2026-03-06T03:02:56.250894023Z" level=info msg="connecting to shim df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320" address="unix:///run/containerd/s/128008a51762fe98e7d9807149129343f3e6c5fa835557948e7b2fd5f02b114d" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:56.303357 systemd[1]: Started cri-containerd-df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320.scope - libcontainer container df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320. Mar 6 03:02:56.378959 containerd[1544]: time="2026-03-06T03:02:56.378902968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8cdfbccd-xdsjp,Uid:6617c0cc-bec3-47a6-9592-47297d1680a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320\"" Mar 6 03:02:56.381310 containerd[1544]: time="2026-03-06T03:02:56.381277754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 03:02:56.969421 containerd[1544]: time="2026-03-06T03:02:56.968867299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mk9x6,Uid:deaaae01-9814-41f5-a6d2-275edfa5df6e,Namespace:calico-system,Attempt:0,}" Mar 6 03:02:57.118008 systemd-networkd[1410]: cali61d90f52799: Link UP Mar 6 03:02:57.119522 systemd-networkd[1410]: cali61d90f52799: Gained carrier Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.025 [INFO][4498] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0 goldmane-5b85766d88- calico-system deaaae01-9814-41f5-a6d2-275edfa5df6e 846 0 2026-03-06 03:02:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011 goldmane-5b85766d88-mk9x6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali61d90f52799 [] [] }} ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Namespace="calico-system" Pod="goldmane-5b85766d88-mk9x6" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.026 [INFO][4498] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Namespace="calico-system" Pod="goldmane-5b85766d88-mk9x6" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.065 [INFO][4509] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" HandleID="k8s-pod-network.258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.075 [INFO][4509] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" 
HandleID="k8s-pod-network.258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fddc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", "pod":"goldmane-5b85766d88-mk9x6", "timestamp":"2026-03-06 03:02:57.065963363 +0000 UTC"}, Hostname:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00054b1e0)} Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.075 [INFO][4509] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.075 [INFO][4509] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.075 [INFO][4509] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.078 [INFO][4509] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.084 [INFO][4509] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.090 [INFO][4509] ipam/ipam.go 526: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.092 [INFO][4509] ipam/ipam.go 160: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.095 [INFO][4509] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.095 [INFO][4509] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.097 [INFO][4509] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457 Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.103 [INFO][4509] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.21.128/26 
handle="k8s-pod-network.258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.111 [INFO][4509] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.21.132/26] block=192.168.21.128/26 handle="k8s-pod-network.258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.111 [INFO][4509] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.21.132/26] handle="k8s-pod-network.258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:57.142542 containerd[1544]: 2026-03-06 03:02:57.111 [INFO][4509] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:02:57.146175 containerd[1544]: 2026-03-06 03:02:57.111 [INFO][4509] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.21.132/26] IPv6=[] ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" HandleID="k8s-pod-network.258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" Mar 6 03:02:57.146175 containerd[1544]: 2026-03-06 03:02:57.114 [INFO][4498] cni-plugin/k8s.go 418: Populated endpoint ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Namespace="calico-system" Pod="goldmane-5b85766d88-mk9x6" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0", 
GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"deaaae01-9814-41f5-a6d2-275edfa5df6e", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"", Pod:"goldmane-5b85766d88-mk9x6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61d90f52799", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:02:57.146175 containerd[1544]: 2026-03-06 03:02:57.114 [INFO][4498] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.132/32] ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Namespace="calico-system" Pod="goldmane-5b85766d88-mk9x6" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" Mar 6 03:02:57.146175 containerd[1544]: 2026-03-06 03:02:57.114 [INFO][4498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61d90f52799 ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Namespace="calico-system" Pod="goldmane-5b85766d88-mk9x6" 
WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" Mar 6 03:02:57.146175 containerd[1544]: 2026-03-06 03:02:57.120 [INFO][4498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Namespace="calico-system" Pod="goldmane-5b85766d88-mk9x6" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" Mar 6 03:02:57.146175 containerd[1544]: 2026-03-06 03:02:57.120 [INFO][4498] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Namespace="calico-system" Pod="goldmane-5b85766d88-mk9x6" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"deaaae01-9814-41f5-a6d2-275edfa5df6e", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", 
ContainerID:"258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457", Pod:"goldmane-5b85766d88-mk9x6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61d90f52799", MAC:"36:f2:05:4e:52:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:02:57.146859 containerd[1544]: 2026-03-06 03:02:57.137 [INFO][4498] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" Namespace="calico-system" Pod="goldmane-5b85766d88-mk9x6" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-goldmane--5b85766d88--mk9x6-eth0" Mar 6 03:02:57.197612 containerd[1544]: time="2026-03-06T03:02:57.197494826Z" level=info msg="connecting to shim 258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457" address="unix:///run/containerd/s/55cad082a0d2e4c66069264c69598488e340645cb91dcb871a4ea1714141cc67" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:57.262400 systemd[1]: Started cri-containerd-258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457.scope - libcontainer container 258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457. 
Mar 6 03:02:57.401332 containerd[1544]: time="2026-03-06T03:02:57.401283261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mk9x6,Uid:deaaae01-9814-41f5-a6d2-275edfa5df6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457\"" Mar 6 03:02:57.426635 systemd-networkd[1410]: cali78e7218333c: Gained IPv6LL Mar 6 03:02:57.968837 containerd[1544]: time="2026-03-06T03:02:57.968479034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pvx7,Uid:d2f1f201-33f8-4652-8176-bbf25d763a0a,Namespace:kube-system,Attempt:0,}" Mar 6 03:02:58.185874 systemd-networkd[1410]: cali90f9615c2fd: Link UP Mar 6 03:02:58.186260 systemd-networkd[1410]: cali90f9615c2fd: Gained carrier Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.056 [INFO][4591] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0 coredns-674b8bbfcf- kube-system d2f1f201-33f8-4652-8176-bbf25d763a0a 843 0 2026-03-06 03:02:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011 coredns-674b8bbfcf-8pvx7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali90f9615c2fd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pvx7" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.056 [INFO][4591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pvx7" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.116 [INFO][4603] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" HandleID="k8s-pod-network.adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.130 [INFO][4603] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" HandleID="k8s-pod-network.adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efe90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", "pod":"coredns-674b8bbfcf-8pvx7", "timestamp":"2026-03-06 03:02:58.116990099 +0000 UTC"}, Hostname:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f82c0)} Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.131 [INFO][4603] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.131 [INFO][4603] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.131 [INFO][4603] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.136 [INFO][4603] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.141 [INFO][4603] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.147 [INFO][4603] ipam/ipam.go 526: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.151 [INFO][4603] ipam/ipam.go 160: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.153 [INFO][4603] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.154 [INFO][4603] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.156 [INFO][4603] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.162 [INFO][4603] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.21.128/26 
handle="k8s-pod-network.adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.173 [INFO][4603] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.21.133/26] block=192.168.21.128/26 handle="k8s-pod-network.adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.173 [INFO][4603] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.21.133/26] handle="k8s-pod-network.adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:02:58.216908 containerd[1544]: 2026-03-06 03:02:58.174 [INFO][4603] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:02:58.221283 containerd[1544]: 2026-03-06 03:02:58.174 [INFO][4603] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.21.133/26] IPv6=[] ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" HandleID="k8s-pod-network.adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" Mar 6 03:02:58.221283 containerd[1544]: 2026-03-06 03:02:58.177 [INFO][4591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pvx7" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0", GenerateName:"coredns-674b8bbfcf-", 
Namespace:"kube-system", SelfLink:"", UID:"d2f1f201-33f8-4652-8176-bbf25d763a0a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"", Pod:"coredns-674b8bbfcf-8pvx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90f9615c2fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:02:58.221283 containerd[1544]: 2026-03-06 03:02:58.178 [INFO][4591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.133/32] ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pvx7" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" Mar 6 03:02:58.221283 
containerd[1544]: 2026-03-06 03:02:58.178 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90f9615c2fd ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pvx7" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" Mar 6 03:02:58.221283 containerd[1544]: 2026-03-06 03:02:58.188 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pvx7" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" Mar 6 03:02:58.221996 containerd[1544]: 2026-03-06 03:02:58.191 [INFO][4591] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pvx7" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d2f1f201-33f8-4652-8176-bbf25d763a0a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a", Pod:"coredns-674b8bbfcf-8pvx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90f9615c2fd", MAC:"3a:b4:61:07:59:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:02:58.221996 containerd[1544]: 2026-03-06 03:02:58.212 [INFO][4591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pvx7" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--8pvx7-eth0" Mar 6 03:02:58.317764 containerd[1544]: time="2026-03-06T03:02:58.317362582Z" level=info msg="connecting to shim adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a" address="unix:///run/containerd/s/e8c9659edfa99463ce828a5e9980f21ec22cd7469368c2261f5ce9aa789b13e0" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:02:58.395294 systemd[1]: Started cri-containerd-adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a.scope - 
libcontainer container adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a. Mar 6 03:02:58.450343 systemd-networkd[1410]: cali61d90f52799: Gained IPv6LL Mar 6 03:02:58.547431 containerd[1544]: time="2026-03-06T03:02:58.547228759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pvx7,Uid:d2f1f201-33f8-4652-8176-bbf25d763a0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a\"" Mar 6 03:02:58.555997 containerd[1544]: time="2026-03-06T03:02:58.555915950Z" level=info msg="CreateContainer within sandbox \"adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 03:02:58.576126 containerd[1544]: time="2026-03-06T03:02:58.575055188Z" level=info msg="Container adfcf972ed3bc98109d90716d21cd5017cfd8b20622883f3891e0387bb0c59f3: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:58.589857 containerd[1544]: time="2026-03-06T03:02:58.589800563Z" level=info msg="CreateContainer within sandbox \"adbb438d97d49fcd1375cddc4d41c67dd924ac7660babd5f7883717d6ca3992a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"adfcf972ed3bc98109d90716d21cd5017cfd8b20622883f3891e0387bb0c59f3\"" Mar 6 03:02:58.594861 containerd[1544]: time="2026-03-06T03:02:58.594677455Z" level=info msg="StartContainer for \"adfcf972ed3bc98109d90716d21cd5017cfd8b20622883f3891e0387bb0c59f3\"" Mar 6 03:02:58.600096 containerd[1544]: time="2026-03-06T03:02:58.599894646Z" level=info msg="connecting to shim adfcf972ed3bc98109d90716d21cd5017cfd8b20622883f3891e0387bb0c59f3" address="unix:///run/containerd/s/e8c9659edfa99463ce828a5e9980f21ec22cd7469368c2261f5ce9aa789b13e0" protocol=ttrpc version=3 Mar 6 03:02:58.644475 systemd[1]: Started cri-containerd-adfcf972ed3bc98109d90716d21cd5017cfd8b20622883f3891e0387bb0c59f3.scope - libcontainer container adfcf972ed3bc98109d90716d21cd5017cfd8b20622883f3891e0387bb0c59f3. 
Mar 6 03:02:58.714107 containerd[1544]: time="2026-03-06T03:02:58.712893659Z" level=info msg="StartContainer for \"adfcf972ed3bc98109d90716d21cd5017cfd8b20622883f3891e0387bb0c59f3\" returns successfully" Mar 6 03:02:59.321802 kubelet[2775]: I0306 03:02:59.320712 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8pvx7" podStartSLOduration=49.320690361 podStartE2EDuration="49.320690361s" podCreationTimestamp="2026-03-06 03:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:02:59.32067839 +0000 UTC m=+54.602867694" watchObservedRunningTime="2026-03-06 03:02:59.320690361 +0000 UTC m=+54.602879675" Mar 6 03:02:59.629571 containerd[1544]: time="2026-03-06T03:02:59.629499689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:59.631109 containerd[1544]: time="2026-03-06T03:02:59.630830994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 6 03:02:59.632333 containerd[1544]: time="2026-03-06T03:02:59.632291283Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:59.637024 containerd[1544]: time="2026-03-06T03:02:59.636965656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 03:02:59.638350 containerd[1544]: time="2026-03-06T03:02:59.638159693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo 
tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.256413832s" Mar 6 03:02:59.638350 containerd[1544]: time="2026-03-06T03:02:59.638204664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 6 03:02:59.641009 containerd[1544]: time="2026-03-06T03:02:59.640976925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 6 03:02:59.666521 containerd[1544]: time="2026-03-06T03:02:59.666435777Z" level=info msg="CreateContainer within sandbox \"df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 6 03:02:59.678113 containerd[1544]: time="2026-03-06T03:02:59.676228461Z" level=info msg="Container 5cb78fda575e261ddc11dc3a33270ef2cc53917e2fc65f03e58e0677f4b0a7fc: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:02:59.695378 containerd[1544]: time="2026-03-06T03:02:59.695246479Z" level=info msg="CreateContainer within sandbox \"df1a2344bdbbaf6b46f15f1ebc95ac3e66ef8db0cb6f43a559b14d02960a3320\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5cb78fda575e261ddc11dc3a33270ef2cc53917e2fc65f03e58e0677f4b0a7fc\"" Mar 6 03:02:59.698102 containerd[1544]: time="2026-03-06T03:02:59.697317797Z" level=info msg="StartContainer for \"5cb78fda575e261ddc11dc3a33270ef2cc53917e2fc65f03e58e0677f4b0a7fc\"" Mar 6 03:02:59.699807 containerd[1544]: time="2026-03-06T03:02:59.699756638Z" level=info msg="connecting to shim 5cb78fda575e261ddc11dc3a33270ef2cc53917e2fc65f03e58e0677f4b0a7fc" address="unix:///run/containerd/s/128008a51762fe98e7d9807149129343f3e6c5fa835557948e7b2fd5f02b114d" protocol=ttrpc version=3 Mar 6 03:02:59.734388 systemd[1]: Started 
cri-containerd-5cb78fda575e261ddc11dc3a33270ef2cc53917e2fc65f03e58e0677f4b0a7fc.scope - libcontainer container 5cb78fda575e261ddc11dc3a33270ef2cc53917e2fc65f03e58e0677f4b0a7fc. Mar 6 03:02:59.816305 containerd[1544]: time="2026-03-06T03:02:59.816166390Z" level=info msg="StartContainer for \"5cb78fda575e261ddc11dc3a33270ef2cc53917e2fc65f03e58e0677f4b0a7fc\" returns successfully" Mar 6 03:02:59.969171 containerd[1544]: time="2026-03-06T03:02:59.968620709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7765f8b4cc-f727z,Uid:d14ba1a7-66d0-474e-934c-ba181579d0e6,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:00.050309 systemd-networkd[1410]: cali90f9615c2fd: Gained IPv6LL Mar 6 03:03:00.126917 systemd-networkd[1410]: cali9aebf76dcc8: Link UP Mar 6 03:03:00.128520 systemd-networkd[1410]: cali9aebf76dcc8: Gained carrier Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.016 [INFO][4757] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0 calico-apiserver-7765f8b4cc- calico-system d14ba1a7-66d0-474e-934c-ba181579d0e6 849 0 2026-03-06 03:02:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7765f8b4cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011 calico-apiserver-7765f8b4cc-f727z eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9aebf76dcc8 [] [] }} ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-f727z" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-" Mar 6 
03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.016 [INFO][4757] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-f727z" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.057 [INFO][4768] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" HandleID="k8s-pod-network.c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.073 [INFO][4768] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" HandleID="k8s-pod-network.c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381d20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", "pod":"calico-apiserver-7765f8b4cc-f727z", "timestamp":"2026-03-06 03:03:00.057913181 +0000 UTC"}, Hostname:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000620420)} Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.073 [INFO][4768] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.073 [INFO][4768] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.073 [INFO][4768] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.076 [INFO][4768] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.082 [INFO][4768] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.091 [INFO][4768] ipam/ipam.go 526: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.094 [INFO][4768] ipam/ipam.go 160: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.099 [INFO][4768] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.100 [INFO][4768] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.102 [INFO][4768] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f Mar 6 03:03:00.149949 containerd[1544]: 
2026-03-06 03:03:00.108 [INFO][4768] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:00.149949 containerd[1544]: 2026-03-06 03:03:00.117 [INFO][4768] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.21.134/26] block=192.168.21.128/26 handle="k8s-pod-network.c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:00.150941 containerd[1544]: 2026-03-06 03:03:00.117 [INFO][4768] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.21.134/26] handle="k8s-pod-network.c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:00.150941 containerd[1544]: 2026-03-06 03:03:00.117 [INFO][4768] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 03:03:00.150941 containerd[1544]: 2026-03-06 03:03:00.117 [INFO][4768] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.21.134/26] IPv6=[] ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" HandleID="k8s-pod-network.c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" Mar 6 03:03:00.150941 containerd[1544]: 2026-03-06 03:03:00.121 [INFO][4757] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-f727z" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0", GenerateName:"calico-apiserver-7765f8b4cc-", Namespace:"calico-system", SelfLink:"", UID:"d14ba1a7-66d0-474e-934c-ba181579d0e6", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7765f8b4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"", 
Pod:"calico-apiserver-7765f8b4cc-f727z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9aebf76dcc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:00.150941 containerd[1544]: 2026-03-06 03:03:00.121 [INFO][4757] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.134/32] ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-f727z" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" Mar 6 03:03:00.150941 containerd[1544]: 2026-03-06 03:03:00.121 [INFO][4757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9aebf76dcc8 ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-f727z" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" Mar 6 03:03:00.150941 containerd[1544]: 2026-03-06 03:03:00.129 [INFO][4757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-f727z" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" Mar 6 03:03:00.153488 containerd[1544]: 2026-03-06 03:03:00.129 [INFO][4757] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-f727z" 
WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0", GenerateName:"calico-apiserver-7765f8b4cc-", Namespace:"calico-system", SelfLink:"", UID:"d14ba1a7-66d0-474e-934c-ba181579d0e6", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7765f8b4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f", Pod:"calico-apiserver-7765f8b4cc-f727z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9aebf76dcc8", MAC:"9e:d4:c0:b0:ca:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:00.153488 containerd[1544]: 2026-03-06 03:03:00.143 [INFO][4757] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" 
Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-f727z" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--f727z-eth0" Mar 6 03:03:00.200963 containerd[1544]: time="2026-03-06T03:03:00.200600183Z" level=info msg="connecting to shim c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f" address="unix:///run/containerd/s/5a6ee9f8abbb0b9e7130f172cfbee7bf4804a321f9fc85fd80857ea66d6bdad0" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:00.248347 systemd[1]: Started cri-containerd-c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f.scope - libcontainer container c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f. Mar 6 03:03:00.351375 kubelet[2775]: I0306 03:03:00.351059 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b8cdfbccd-xdsjp" podStartSLOduration=31.092067072 podStartE2EDuration="34.351033205s" podCreationTimestamp="2026-03-06 03:02:26 +0000 UTC" firstStartedPulling="2026-03-06 03:02:56.380882265 +0000 UTC m=+51.663071544" lastFinishedPulling="2026-03-06 03:02:59.63984838 +0000 UTC m=+54.922037677" observedRunningTime="2026-03-06 03:03:00.348058801 +0000 UTC m=+55.630248108" watchObservedRunningTime="2026-03-06 03:03:00.351033205 +0000 UTC m=+55.633222505" Mar 6 03:03:00.397920 containerd[1544]: time="2026-03-06T03:03:00.397851389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7765f8b4cc-f727z,Uid:d14ba1a7-66d0-474e-934c-ba181579d0e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f\"" Mar 6 03:03:00.970380 containerd[1544]: time="2026-03-06T03:03:00.970330305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7765f8b4cc-c5j8p,Uid:de8c1808-0dd2-4e67-850d-f9912f93270e,Namespace:calico-system,Attempt:0,}" Mar 6 03:03:00.976961 containerd[1544]: 
time="2026-03-06T03:03:00.976911812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7jb7j,Uid:4ad502fb-71e7-439a-8f21-0ff5a8f05db0,Namespace:kube-system,Attempt:0,}" Mar 6 03:03:01.267631 systemd-networkd[1410]: cali0c76e9fed2f: Link UP Mar 6 03:03:01.269607 systemd-networkd[1410]: cali0c76e9fed2f: Gained carrier Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.058 [INFO][4885] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0 coredns-674b8bbfcf- kube-system 4ad502fb-71e7-439a-8f21-0ff5a8f05db0 845 0 2026-03-06 03:02:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011 coredns-674b8bbfcf-7jb7j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0c76e9fed2f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jb7j" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.059 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jb7j" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.162 [INFO][4911] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" HandleID="k8s-pod-network.fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.183 [INFO][4911] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" HandleID="k8s-pod-network.fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000408260), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", "pod":"coredns-674b8bbfcf-7jb7j", "timestamp":"2026-03-06 03:03:01.162980155 +0000 UTC"}, Hostname:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000456160)} Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.183 [INFO][4911] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.183 [INFO][4911] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.184 [INFO][4911] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.188 [INFO][4911] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.194 [INFO][4911] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.202 [INFO][4911] ipam/ipam.go 526: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.209 [INFO][4911] ipam/ipam.go 160: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.217 [INFO][4911] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.218 [INFO][4911] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.223 [INFO][4911] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.237 [INFO][4911] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.21.128/26 
handle="k8s-pod-network.fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.249 [INFO][4911] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.21.135/26] block=192.168.21.128/26 handle="k8s-pod-network.fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.251 [INFO][4911] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.21.135/26] handle="k8s-pod-network.fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.299993 containerd[1544]: 2026-03-06 03:03:01.252 [INFO][4911] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:03:01.302991 containerd[1544]: 2026-03-06 03:03:01.252 [INFO][4911] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.21.135/26] IPv6=[] ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" HandleID="k8s-pod-network.fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" Mar 6 03:03:01.302991 containerd[1544]: 2026-03-06 03:03:01.257 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jb7j" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0", GenerateName:"coredns-674b8bbfcf-", 
Namespace:"kube-system", SelfLink:"", UID:"4ad502fb-71e7-439a-8f21-0ff5a8f05db0", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"", Pod:"coredns-674b8bbfcf-7jb7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c76e9fed2f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:01.302991 containerd[1544]: 2026-03-06 03:03:01.257 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.135/32] ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jb7j" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" Mar 6 03:03:01.302991 
containerd[1544]: 2026-03-06 03:03:01.257 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c76e9fed2f ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jb7j" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" Mar 6 03:03:01.302991 containerd[1544]: 2026-03-06 03:03:01.270 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jb7j" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" Mar 6 03:03:01.303449 containerd[1544]: 2026-03-06 03:03:01.271 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jb7j" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4ad502fb-71e7-439a-8f21-0ff5a8f05db0", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c", Pod:"coredns-674b8bbfcf-7jb7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c76e9fed2f", MAC:"0a:02:85:ab:e1:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:01.303449 containerd[1544]: 2026-03-06 03:03:01.286 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jb7j" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-coredns--674b8bbfcf--7jb7j-eth0" Mar 6 03:03:01.416068 systemd-networkd[1410]: cali72f084b63e2: Link UP Mar 6 03:03:01.416904 systemd-networkd[1410]: cali72f084b63e2: Gained carrier Mar 6 03:03:01.421046 containerd[1544]: time="2026-03-06T03:03:01.420852156Z" level=info msg="connecting to shim fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c" address="unix:///run/containerd/s/4968dfe82cf915982cdf158bf0053dcce971ea23db799452a75ab4afdb50ea4c" namespace=k8s.io 
protocol=ttrpc version=3 Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.120 [INFO][4884] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0 calico-apiserver-7765f8b4cc- calico-system de8c1808-0dd2-4e67-850d-f9912f93270e 848 0 2026-03-06 03:02:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7765f8b4cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011 calico-apiserver-7765f8b4cc-c5j8p eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali72f084b63e2 [] [] }} ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-c5j8p" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.120 [INFO][4884] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-c5j8p" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.216 [INFO][4917] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" HandleID="k8s-pod-network.c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" 
Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.237 [INFO][4917] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" HandleID="k8s-pod-network.c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", "pod":"calico-apiserver-7765f8b4cc-c5j8p", "timestamp":"2026-03-06 03:03:01.216675229 +0000 UTC"}, Hostname:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000246c60)} Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.237 [INFO][4917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.252 [INFO][4917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.252 [INFO][4917] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011' Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.294 [INFO][4917] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.309 [INFO][4917] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.319 [INFO][4917] ipam/ipam.go 526: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.326 [INFO][4917] ipam/ipam.go 160: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.340 [INFO][4917] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.341 [INFO][4917] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.349 [INFO][4917] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.374 [INFO][4917] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.21.128/26 
handle="k8s-pod-network.c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.463318 containerd[1544]: 2026-03-06 03:03:01.394 [INFO][4917] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.21.136/26] block=192.168.21.128/26 handle="k8s-pod-network.c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.465714 containerd[1544]: 2026-03-06 03:03:01.394 [INFO][4917] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.21.136/26] handle="k8s-pod-network.c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" host="ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011" Mar 6 03:03:01.465714 containerd[1544]: 2026-03-06 03:03:01.394 [INFO][4917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 03:03:01.465714 containerd[1544]: 2026-03-06 03:03:01.394 [INFO][4917] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.21.136/26] IPv6=[] ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" HandleID="k8s-pod-network.c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Workload="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" Mar 6 03:03:01.465714 containerd[1544]: 2026-03-06 03:03:01.405 [INFO][4884] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-c5j8p" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0", GenerateName:"calico-apiserver-7765f8b4cc-", Namespace:"calico-system", SelfLink:"", UID:"de8c1808-0dd2-4e67-850d-f9912f93270e", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7765f8b4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"", Pod:"calico-apiserver-7765f8b4cc-c5j8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali72f084b63e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:01.465714 containerd[1544]: 2026-03-06 03:03:01.405 [INFO][4884] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.136/32] ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-c5j8p" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" Mar 6 03:03:01.465714 containerd[1544]: 2026-03-06 03:03:01.406 [INFO][4884] cni-plugin/dataplane_linux.go 69: Setting the 
host side veth name to cali72f084b63e2 ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-c5j8p" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" Mar 6 03:03:01.465714 containerd[1544]: 2026-03-06 03:03:01.418 [INFO][4884] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-c5j8p" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" Mar 6 03:03:01.466628 containerd[1544]: 2026-03-06 03:03:01.421 [INFO][4884] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-c5j8p" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0", GenerateName:"calico-apiserver-7765f8b4cc-", Namespace:"calico-system", SelfLink:"", UID:"de8c1808-0dd2-4e67-850d-f9912f93270e", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 3, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7765f8b4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-nightly-20260305-2100-dff6851aca2ec592b011", ContainerID:"c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec", Pod:"calico-apiserver-7765f8b4cc-c5j8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali72f084b63e2", MAC:"56:eb:30:63:40:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 03:03:01.466628 containerd[1544]: 2026-03-06 03:03:01.450 [INFO][4884] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" Namespace="calico-system" Pod="calico-apiserver-7765f8b4cc-c5j8p" WorkloadEndpoint="ci--4459--2--3--nightly--20260305--2100--dff6851aca2ec592b011-k8s-calico--apiserver--7765f8b4cc--c5j8p-eth0" Mar 6 03:03:01.508310 systemd[1]: Started cri-containerd-fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c.scope - libcontainer container fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c. 
Mar 6 03:03:01.555597 containerd[1544]: time="2026-03-06T03:03:01.554486854Z" level=info msg="connecting to shim c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec" address="unix:///run/containerd/s/4a83a285fa93f48181edf7464499cc99409c32185cb1c4677c1e2192a0ca0d59" namespace=k8s.io protocol=ttrpc version=3 Mar 6 03:03:01.676820 systemd[1]: Started cri-containerd-c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec.scope - libcontainer container c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec. Mar 6 03:03:01.692722 containerd[1544]: time="2026-03-06T03:03:01.692679798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7jb7j,Uid:4ad502fb-71e7-439a-8f21-0ff5a8f05db0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c\"" Mar 6 03:03:01.706252 containerd[1544]: time="2026-03-06T03:03:01.705729659Z" level=info msg="CreateContainer within sandbox \"fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 03:03:01.760047 containerd[1544]: time="2026-03-06T03:03:01.759815444Z" level=info msg="Container 9a3034692bc7533e14db469f4c195ccce7a2975d2d8b6480b428284ab91752b7: CDI devices from CRI Config.CDIDevices: []" Mar 6 03:03:01.771989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3557968037.mount: Deactivated successfully. 
Mar 6 03:03:01.787889 containerd[1544]: time="2026-03-06T03:03:01.787517366Z" level=info msg="CreateContainer within sandbox \"fdff51298a5bc30c5b33af57bbe61bdc74576115b5f30f3528dbd74e7387866c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a3034692bc7533e14db469f4c195ccce7a2975d2d8b6480b428284ab91752b7\"" Mar 6 03:03:01.789390 containerd[1544]: time="2026-03-06T03:03:01.789358071Z" level=info msg="StartContainer for \"9a3034692bc7533e14db469f4c195ccce7a2975d2d8b6480b428284ab91752b7\"" Mar 6 03:03:01.791385 containerd[1544]: time="2026-03-06T03:03:01.791324676Z" level=info msg="connecting to shim 9a3034692bc7533e14db469f4c195ccce7a2975d2d8b6480b428284ab91752b7" address="unix:///run/containerd/s/4968dfe82cf915982cdf158bf0053dcce971ea23db799452a75ab4afdb50ea4c" protocol=ttrpc version=3 Mar 6 03:03:01.842259 systemd-networkd[1410]: cali9aebf76dcc8: Gained IPv6LL Mar 6 03:03:01.849238 systemd[1]: Started cri-containerd-9a3034692bc7533e14db469f4c195ccce7a2975d2d8b6480b428284ab91752b7.scope - libcontainer container 9a3034692bc7533e14db469f4c195ccce7a2975d2d8b6480b428284ab91752b7. Mar 6 03:03:01.959092 containerd[1544]: time="2026-03-06T03:03:01.959005904Z" level=info msg="StartContainer for \"9a3034692bc7533e14db469f4c195ccce7a2975d2d8b6480b428284ab91752b7\" returns successfully" Mar 6 03:03:01.975367 containerd[1544]: time="2026-03-06T03:03:01.974956345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7765f8b4cc-c5j8p,Uid:de8c1808-0dd2-4e67-850d-f9912f93270e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec\"" Mar 6 03:03:02.253427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206241312.mount: Deactivated successfully. 
Mar 6 03:03:02.379238 kubelet[2775]: I0306 03:03:02.378883 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7jb7j" podStartSLOduration=52.378530065 podStartE2EDuration="52.378530065s" podCreationTimestamp="2026-03-06 03:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 03:03:02.371364606 +0000 UTC m=+57.653553910" watchObservedRunningTime="2026-03-06 03:03:02.378530065 +0000 UTC m=+57.660719367"
Mar 6 03:03:02.548638 systemd-networkd[1410]: cali72f084b63e2: Gained IPv6LL
Mar 6 03:03:02.610623 systemd-networkd[1410]: cali0c76e9fed2f: Gained IPv6LL
Mar 6 03:03:03.015945 containerd[1544]: time="2026-03-06T03:03:03.015862525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:03.017676 containerd[1544]: time="2026-03-06T03:03:03.017598362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 6 03:03:03.019265 containerd[1544]: time="2026-03-06T03:03:03.018897495Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:03.021896 containerd[1544]: time="2026-03-06T03:03:03.021831526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:03.022921 containerd[1544]: time="2026-03-06T03:03:03.022771337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.381753407s"
Mar 6 03:03:03.022921 containerd[1544]: time="2026-03-06T03:03:03.022814367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 6 03:03:03.024867 containerd[1544]: time="2026-03-06T03:03:03.024811935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 6 03:03:03.030115 containerd[1544]: time="2026-03-06T03:03:03.030039535Z" level=info msg="CreateContainer within sandbox \"258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 6 03:03:03.042731 containerd[1544]: time="2026-03-06T03:03:03.041224516Z" level=info msg="Container 834a5a11e2aa2ab9cec018e95577fe5a2abbd40001a636e422f618baab7faf44: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:03:03.055910 containerd[1544]: time="2026-03-06T03:03:03.055863657Z" level=info msg="CreateContainer within sandbox \"258e6fe52a37e9a6697532ac8725061db738ea22926f674a3d508aae38ce6457\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"834a5a11e2aa2ab9cec018e95577fe5a2abbd40001a636e422f618baab7faf44\""
Mar 6 03:03:03.058160 containerd[1544]: time="2026-03-06T03:03:03.056451790Z" level=info msg="StartContainer for \"834a5a11e2aa2ab9cec018e95577fe5a2abbd40001a636e422f618baab7faf44\""
Mar 6 03:03:03.058160 containerd[1544]: time="2026-03-06T03:03:03.058038564Z" level=info msg="connecting to shim 834a5a11e2aa2ab9cec018e95577fe5a2abbd40001a636e422f618baab7faf44" address="unix:///run/containerd/s/55cad082a0d2e4c66069264c69598488e340645cb91dcb871a4ea1714141cc67" protocol=ttrpc version=3
Mar 6 03:03:03.099301 systemd[1]: Started cri-containerd-834a5a11e2aa2ab9cec018e95577fe5a2abbd40001a636e422f618baab7faf44.scope - libcontainer container 834a5a11e2aa2ab9cec018e95577fe5a2abbd40001a636e422f618baab7faf44.
Mar 6 03:03:03.176004 containerd[1544]: time="2026-03-06T03:03:03.175953652Z" level=info msg="StartContainer for \"834a5a11e2aa2ab9cec018e95577fe5a2abbd40001a636e422f618baab7faf44\" returns successfully"
Mar 6 03:03:03.370803 kubelet[2775]: I0306 03:03:03.370724 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-mk9x6" podStartSLOduration=32.750158638 podStartE2EDuration="38.370699645s" podCreationTimestamp="2026-03-06 03:02:25 +0000 UTC" firstStartedPulling="2026-03-06 03:02:57.403664782 +0000 UTC m=+52.685854065" lastFinishedPulling="2026-03-06 03:03:03.02420578 +0000 UTC m=+58.306395072" observedRunningTime="2026-03-06 03:03:03.36944913 +0000 UTC m=+58.651638435" watchObservedRunningTime="2026-03-06 03:03:03.370699645 +0000 UTC m=+58.652888952"
Mar 6 03:03:04.798391 ntpd[1650]: Listen normally on 10 cali78e7218333c [fe80::ecee:eeff:feee:eeee%9]:123
Mar 6 03:03:04.798932 ntpd[1650]: 6 Mar 03:03:04 ntpd[1650]: Listen normally on 10 cali78e7218333c [fe80::ecee:eeff:feee:eeee%9]:123
Mar 6 03:03:04.798932 ntpd[1650]: 6 Mar 03:03:04 ntpd[1650]: Listen normally on 11 cali61d90f52799 [fe80::ecee:eeff:feee:eeee%10]:123
Mar 6 03:03:04.798932 ntpd[1650]: 6 Mar 03:03:04 ntpd[1650]: Listen normally on 12 cali90f9615c2fd [fe80::ecee:eeff:feee:eeee%11]:123
Mar 6 03:03:04.798932 ntpd[1650]: 6 Mar 03:03:04 ntpd[1650]: Listen normally on 13 cali9aebf76dcc8 [fe80::ecee:eeff:feee:eeee%12]:123
Mar 6 03:03:04.798932 ntpd[1650]: 6 Mar 03:03:04 ntpd[1650]: Listen normally on 14 cali0c76e9fed2f [fe80::ecee:eeff:feee:eeee%13]:123
Mar 6 03:03:04.798932 ntpd[1650]: 6 Mar 03:03:04 ntpd[1650]: Listen normally on 15 cali72f084b63e2 [fe80::ecee:eeff:feee:eeee%14]:123
Mar 6 03:03:04.798469 ntpd[1650]: Listen normally on 11 cali61d90f52799 [fe80::ecee:eeff:feee:eeee%10]:123
Mar 6 03:03:04.798509 ntpd[1650]: Listen normally on 12 cali90f9615c2fd [fe80::ecee:eeff:feee:eeee%11]:123
Mar 6 03:03:04.798548 ntpd[1650]: Listen normally on 13 cali9aebf76dcc8 [fe80::ecee:eeff:feee:eeee%12]:123
Mar 6 03:03:04.798586 ntpd[1650]: Listen normally on 14 cali0c76e9fed2f [fe80::ecee:eeff:feee:eeee%13]:123
Mar 6 03:03:04.798626 ntpd[1650]: Listen normally on 15 cali72f084b63e2 [fe80::ecee:eeff:feee:eeee%14]:123
Mar 6 03:03:05.573374 containerd[1544]: time="2026-03-06T03:03:05.573308894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:05.574546 containerd[1544]: time="2026-03-06T03:03:05.574509575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 6 03:03:05.576551 containerd[1544]: time="2026-03-06T03:03:05.576179163Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:05.580664 containerd[1544]: time="2026-03-06T03:03:05.580601525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:05.581813 containerd[1544]: time="2026-03-06T03:03:05.581636104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.556781532s"
Mar 6 03:03:05.581813 containerd[1544]: time="2026-03-06T03:03:05.581682572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 6 03:03:05.583301 containerd[1544]: time="2026-03-06T03:03:05.583052085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 6 03:03:05.588117 containerd[1544]: time="2026-03-06T03:03:05.587636749Z" level=info msg="CreateContainer within sandbox \"c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 6 03:03:05.600099 containerd[1544]: time="2026-03-06T03:03:05.599229600Z" level=info msg="Container a314f5edd92aa43cb5351628c919d4c78ca355a5bd352406750e181d48b0d7a1: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:03:05.613067 containerd[1544]: time="2026-03-06T03:03:05.613010847Z" level=info msg="CreateContainer within sandbox \"c52c70f158b7d83fc485ec403fbe7cf502d2bb69a13f7c376444232a3b41301f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a314f5edd92aa43cb5351628c919d4c78ca355a5bd352406750e181d48b0d7a1\""
Mar 6 03:03:05.613733 containerd[1544]: time="2026-03-06T03:03:05.613680790Z" level=info msg="StartContainer for \"a314f5edd92aa43cb5351628c919d4c78ca355a5bd352406750e181d48b0d7a1\""
Mar 6 03:03:05.615830 containerd[1544]: time="2026-03-06T03:03:05.615766203Z" level=info msg="connecting to shim a314f5edd92aa43cb5351628c919d4c78ca355a5bd352406750e181d48b0d7a1" address="unix:///run/containerd/s/5a6ee9f8abbb0b9e7130f172cfbee7bf4804a321f9fc85fd80857ea66d6bdad0" protocol=ttrpc version=3
Mar 6 03:03:05.650270 systemd[1]: Started cri-containerd-a314f5edd92aa43cb5351628c919d4c78ca355a5bd352406750e181d48b0d7a1.scope - libcontainer container a314f5edd92aa43cb5351628c919d4c78ca355a5bd352406750e181d48b0d7a1.
Mar 6 03:03:05.721631 containerd[1544]: time="2026-03-06T03:03:05.721563917Z" level=info msg="StartContainer for \"a314f5edd92aa43cb5351628c919d4c78ca355a5bd352406750e181d48b0d7a1\" returns successfully"
Mar 6 03:03:05.779840 containerd[1544]: time="2026-03-06T03:03:05.779779090Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 03:03:05.781936 containerd[1544]: time="2026-03-06T03:03:05.781187640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 6 03:03:05.784760 containerd[1544]: time="2026-03-06T03:03:05.784599424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 201.49183ms"
Mar 6 03:03:05.784760 containerd[1544]: time="2026-03-06T03:03:05.784643170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 6 03:03:05.792373 containerd[1544]: time="2026-03-06T03:03:05.792323023Z" level=info msg="CreateContainer within sandbox \"c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 6 03:03:05.804294 containerd[1544]: time="2026-03-06T03:03:05.802675976Z" level=info msg="Container 16856ec9bda399acc059ee99f8d9a55c5090cc2a20626090249322567edc81c3: CDI devices from CRI Config.CDIDevices: []"
Mar 6 03:03:05.823125 containerd[1544]: time="2026-03-06T03:03:05.823039911Z" level=info msg="CreateContainer within sandbox \"c12f8f8348303fd1ab4b985e2e29ad97e781c19953294093defa4843ba4c4eec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"16856ec9bda399acc059ee99f8d9a55c5090cc2a20626090249322567edc81c3\""
Mar 6 03:03:05.824866 containerd[1544]: time="2026-03-06T03:03:05.824000294Z" level=info msg="StartContainer for \"16856ec9bda399acc059ee99f8d9a55c5090cc2a20626090249322567edc81c3\""
Mar 6 03:03:05.828006 containerd[1544]: time="2026-03-06T03:03:05.827960727Z" level=info msg="connecting to shim 16856ec9bda399acc059ee99f8d9a55c5090cc2a20626090249322567edc81c3" address="unix:///run/containerd/s/4a83a285fa93f48181edf7464499cc99409c32185cb1c4677c1e2192a0ca0d59" protocol=ttrpc version=3
Mar 6 03:03:05.877365 systemd[1]: Started cri-containerd-16856ec9bda399acc059ee99f8d9a55c5090cc2a20626090249322567edc81c3.scope - libcontainer container 16856ec9bda399acc059ee99f8d9a55c5090cc2a20626090249322567edc81c3.
Mar 6 03:03:05.965560 containerd[1544]: time="2026-03-06T03:03:05.965511245Z" level=info msg="StartContainer for \"16856ec9bda399acc059ee99f8d9a55c5090cc2a20626090249322567edc81c3\" returns successfully"
Mar 6 03:03:06.398941 kubelet[2775]: I0306 03:03:06.398317 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7765f8b4cc-c5j8p" podStartSLOduration=37.593756591 podStartE2EDuration="41.398295988s" podCreationTimestamp="2026-03-06 03:02:25 +0000 UTC" firstStartedPulling="2026-03-06 03:03:01.982107657 +0000 UTC m=+57.264296936" lastFinishedPulling="2026-03-06 03:03:05.78664704 +0000 UTC m=+61.068836333" observedRunningTime="2026-03-06 03:03:06.396689028 +0000 UTC m=+61.678878332" watchObservedRunningTime="2026-03-06 03:03:06.398295988 +0000 UTC m=+61.680485292"
Mar 6 03:03:06.419694 kubelet[2775]: I0306 03:03:06.418830 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7765f8b4cc-f727z" podStartSLOduration=36.238649833 podStartE2EDuration="41.418806534s" podCreationTimestamp="2026-03-06 03:02:25 +0000 UTC" firstStartedPulling="2026-03-06 03:03:00.402617527 +0000 UTC m=+55.684806804" lastFinishedPulling="2026-03-06 03:03:05.58277421 +0000 UTC m=+60.864963505" observedRunningTime="2026-03-06 03:03:06.416466009 +0000 UTC m=+61.698655315" watchObservedRunningTime="2026-03-06 03:03:06.418806534 +0000 UTC m=+61.700995839"
Mar 6 03:03:07.384036 kubelet[2775]: I0306 03:03:07.383868 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 6 03:03:22.550987 systemd[1]: Started sshd@7-10.128.0.102:22-20.161.92.111:33748.service - OpenSSH per-connection server daemon (20.161.92.111:33748).
Mar 6 03:03:22.816349 sshd[5350]: Accepted publickey for core from 20.161.92.111 port 33748 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:22.819555 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:22.829296 systemd-logind[1518]: New session 8 of user core.
Mar 6 03:03:22.835311 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 6 03:03:23.133787 sshd[5353]: Connection closed by 20.161.92.111 port 33748
Mar 6 03:03:23.134413 sshd-session[5350]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:23.145305 systemd[1]: sshd@7-10.128.0.102:22-20.161.92.111:33748.service: Deactivated successfully.
Mar 6 03:03:23.150904 systemd[1]: session-8.scope: Deactivated successfully.
Mar 6 03:03:23.153715 systemd-logind[1518]: Session 8 logged out. Waiting for processes to exit.
Mar 6 03:03:23.157025 systemd-logind[1518]: Removed session 8.
Mar 6 03:03:28.181399 systemd[1]: Started sshd@8-10.128.0.102:22-20.161.92.111:33750.service - OpenSSH per-connection server daemon (20.161.92.111:33750).
Mar 6 03:03:28.408453 sshd[5369]: Accepted publickey for core from 20.161.92.111 port 33750 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:28.410187 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:28.417321 systemd-logind[1518]: New session 9 of user core.
Mar 6 03:03:28.423270 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 6 03:03:28.591979 sshd[5373]: Connection closed by 20.161.92.111 port 33750
Mar 6 03:03:28.593421 sshd-session[5369]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:28.600157 systemd-logind[1518]: Session 9 logged out. Waiting for processes to exit.
Mar 6 03:03:28.600534 systemd[1]: sshd@8-10.128.0.102:22-20.161.92.111:33750.service: Deactivated successfully.
Mar 6 03:03:28.603955 systemd[1]: session-9.scope: Deactivated successfully.
Mar 6 03:03:28.606570 systemd-logind[1518]: Removed session 9.
Mar 6 03:03:32.843337 kubelet[2775]: I0306 03:03:32.842691 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 6 03:03:33.632776 systemd[1]: Started sshd@9-10.128.0.102:22-20.161.92.111:34366.service - OpenSSH per-connection server daemon (20.161.92.111:34366).
Mar 6 03:03:33.847145 sshd[5419]: Accepted publickey for core from 20.161.92.111 port 34366 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:33.848904 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:33.856773 systemd-logind[1518]: New session 10 of user core.
Mar 6 03:03:33.868283 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 6 03:03:34.030204 sshd[5422]: Connection closed by 20.161.92.111 port 34366
Mar 6 03:03:34.032380 sshd-session[5419]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:34.039152 systemd-logind[1518]: Session 10 logged out. Waiting for processes to exit.
Mar 6 03:03:34.039681 systemd[1]: sshd@9-10.128.0.102:22-20.161.92.111:34366.service: Deactivated successfully.
Mar 6 03:03:34.042762 systemd[1]: session-10.scope: Deactivated successfully.
Mar 6 03:03:34.045813 systemd-logind[1518]: Removed session 10.
Mar 6 03:03:39.074521 systemd[1]: Started sshd@10-10.128.0.102:22-20.161.92.111:34378.service - OpenSSH per-connection server daemon (20.161.92.111:34378).
Mar 6 03:03:39.290816 sshd[5481]: Accepted publickey for core from 20.161.92.111 port 34378 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:39.292917 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:39.300895 systemd-logind[1518]: New session 11 of user core.
Mar 6 03:03:39.306286 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 6 03:03:39.470853 sshd[5484]: Connection closed by 20.161.92.111 port 34378
Mar 6 03:03:39.472417 sshd-session[5481]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:39.479464 systemd[1]: sshd@10-10.128.0.102:22-20.161.92.111:34378.service: Deactivated successfully.
Mar 6 03:03:39.482706 systemd[1]: session-11.scope: Deactivated successfully.
Mar 6 03:03:39.484528 systemd-logind[1518]: Session 11 logged out. Waiting for processes to exit.
Mar 6 03:03:39.486513 systemd-logind[1518]: Removed session 11.
Mar 6 03:03:44.513941 systemd[1]: Started sshd@11-10.128.0.102:22-20.161.92.111:39394.service - OpenSSH per-connection server daemon (20.161.92.111:39394).
Mar 6 03:03:44.728292 sshd[5524]: Accepted publickey for core from 20.161.92.111 port 39394 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:44.730308 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:44.737200 systemd-logind[1518]: New session 12 of user core.
Mar 6 03:03:44.744270 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 6 03:03:44.906711 sshd[5527]: Connection closed by 20.161.92.111 port 39394
Mar 6 03:03:44.908177 sshd-session[5524]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:44.914537 systemd[1]: sshd@11-10.128.0.102:22-20.161.92.111:39394.service: Deactivated successfully.
Mar 6 03:03:44.917431 systemd[1]: session-12.scope: Deactivated successfully.
Mar 6 03:03:44.918641 systemd-logind[1518]: Session 12 logged out. Waiting for processes to exit.
Mar 6 03:03:44.921026 systemd-logind[1518]: Removed session 12.
Mar 6 03:03:44.950387 systemd[1]: Started sshd@12-10.128.0.102:22-20.161.92.111:39400.service - OpenSSH per-connection server daemon (20.161.92.111:39400).
Mar 6 03:03:45.181307 sshd[5540]: Accepted publickey for core from 20.161.92.111 port 39400 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:45.183226 sshd-session[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:45.190524 systemd-logind[1518]: New session 13 of user core.
Mar 6 03:03:45.200284 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 6 03:03:45.399719 sshd[5543]: Connection closed by 20.161.92.111 port 39400
Mar 6 03:03:45.400896 sshd-session[5540]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:45.412431 systemd[1]: sshd@12-10.128.0.102:22-20.161.92.111:39400.service: Deactivated successfully.
Mar 6 03:03:45.417939 systemd[1]: session-13.scope: Deactivated successfully.
Mar 6 03:03:45.419883 systemd-logind[1518]: Session 13 logged out. Waiting for processes to exit.
Mar 6 03:03:45.422964 systemd-logind[1518]: Removed session 13.
Mar 6 03:03:45.443395 systemd[1]: Started sshd@13-10.128.0.102:22-20.161.92.111:39406.service - OpenSSH per-connection server daemon (20.161.92.111:39406).
Mar 6 03:03:45.656953 sshd[5553]: Accepted publickey for core from 20.161.92.111 port 39406 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:45.659129 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:45.666483 systemd-logind[1518]: New session 14 of user core.
Mar 6 03:03:45.672328 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 6 03:03:45.852344 sshd[5556]: Connection closed by 20.161.92.111 port 39406
Mar 6 03:03:45.853188 sshd-session[5553]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:45.860248 systemd[1]: sshd@13-10.128.0.102:22-20.161.92.111:39406.service: Deactivated successfully.
Mar 6 03:03:45.863819 systemd[1]: session-14.scope: Deactivated successfully.
Mar 6 03:03:45.866350 systemd-logind[1518]: Session 14 logged out. Waiting for processes to exit.
Mar 6 03:03:45.869647 systemd-logind[1518]: Removed session 14.
Mar 6 03:03:50.897831 systemd[1]: Started sshd@14-10.128.0.102:22-20.161.92.111:42266.service - OpenSSH per-connection server daemon (20.161.92.111:42266).
Mar 6 03:03:51.120280 sshd[5591]: Accepted publickey for core from 20.161.92.111 port 42266 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:51.122065 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:51.128490 systemd-logind[1518]: New session 15 of user core.
Mar 6 03:03:51.135292 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 6 03:03:51.320972 sshd[5594]: Connection closed by 20.161.92.111 port 42266
Mar 6 03:03:51.322700 sshd-session[5591]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:51.329229 systemd[1]: sshd@14-10.128.0.102:22-20.161.92.111:42266.service: Deactivated successfully.
Mar 6 03:03:51.332168 systemd[1]: session-15.scope: Deactivated successfully.
Mar 6 03:03:51.334155 systemd-logind[1518]: Session 15 logged out. Waiting for processes to exit.
Mar 6 03:03:51.336645 systemd-logind[1518]: Removed session 15.
Mar 6 03:03:51.367089 systemd[1]: Started sshd@15-10.128.0.102:22-20.161.92.111:42268.service - OpenSSH per-connection server daemon (20.161.92.111:42268).
Mar 6 03:03:51.606830 sshd[5606]: Accepted publickey for core from 20.161.92.111 port 42268 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:51.608942 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:51.615688 systemd-logind[1518]: New session 16 of user core.
Mar 6 03:03:51.623327 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 6 03:03:51.893204 sshd[5609]: Connection closed by 20.161.92.111 port 42268
Mar 6 03:03:51.894419 sshd-session[5606]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:51.900898 systemd[1]: sshd@15-10.128.0.102:22-20.161.92.111:42268.service: Deactivated successfully.
Mar 6 03:03:51.904367 systemd[1]: session-16.scope: Deactivated successfully.
Mar 6 03:03:51.906158 systemd-logind[1518]: Session 16 logged out. Waiting for processes to exit.
Mar 6 03:03:51.908769 systemd-logind[1518]: Removed session 16.
Mar 6 03:03:51.936192 systemd[1]: Started sshd@16-10.128.0.102:22-20.161.92.111:42284.service - OpenSSH per-connection server daemon (20.161.92.111:42284).
Mar 6 03:03:52.155213 sshd[5619]: Accepted publickey for core from 20.161.92.111 port 42284 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:52.156902 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:52.163313 systemd-logind[1518]: New session 17 of user core.
Mar 6 03:03:52.175312 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 6 03:03:53.180360 sshd[5622]: Connection closed by 20.161.92.111 port 42284
Mar 6 03:03:53.181901 sshd-session[5619]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:53.191566 systemd-logind[1518]: Session 17 logged out. Waiting for processes to exit.
Mar 6 03:03:53.193054 systemd[1]: sshd@16-10.128.0.102:22-20.161.92.111:42284.service: Deactivated successfully.
Mar 6 03:03:53.199927 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 03:03:53.208420 systemd-logind[1518]: Removed session 17.
Mar 6 03:03:53.238423 systemd[1]: Started sshd@17-10.128.0.102:22-20.161.92.111:42300.service - OpenSSH per-connection server daemon (20.161.92.111:42300).
Mar 6 03:03:53.528692 sshd[5642]: Accepted publickey for core from 20.161.92.111 port 42300 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:53.531340 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:53.542901 systemd-logind[1518]: New session 18 of user core.
Mar 6 03:03:53.548333 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 03:03:54.010273 sshd[5651]: Connection closed by 20.161.92.111 port 42300
Mar 6 03:03:54.014410 sshd-session[5642]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:54.023784 systemd[1]: sshd@17-10.128.0.102:22-20.161.92.111:42300.service: Deactivated successfully.
Mar 6 03:03:54.028504 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 03:03:54.031330 systemd-logind[1518]: Session 18 logged out. Waiting for processes to exit.
Mar 6 03:03:54.034309 systemd-logind[1518]: Removed session 18.
Mar 6 03:03:54.060444 systemd[1]: Started sshd@18-10.128.0.102:22-20.161.92.111:42310.service - OpenSSH per-connection server daemon (20.161.92.111:42310).
Mar 6 03:03:54.298102 sshd[5661]: Accepted publickey for core from 20.161.92.111 port 42310 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:54.302033 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:54.314418 systemd-logind[1518]: New session 19 of user core.
Mar 6 03:03:54.321864 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 03:03:54.525107 sshd[5664]: Connection closed by 20.161.92.111 port 42310
Mar 6 03:03:54.522798 sshd-session[5661]: pam_unix(sshd:session): session closed for user core
Mar 6 03:03:54.531520 systemd[1]: sshd@18-10.128.0.102:22-20.161.92.111:42310.service: Deactivated successfully.
Mar 6 03:03:54.537529 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 03:03:54.541301 systemd-logind[1518]: Session 19 logged out. Waiting for processes to exit.
Mar 6 03:03:54.548231 systemd-logind[1518]: Removed session 19.
Mar 6 03:03:59.574455 systemd[1]: Started sshd@19-10.128.0.102:22-20.161.92.111:42318.service - OpenSSH per-connection server daemon (20.161.92.111:42318).
Mar 6 03:03:59.818436 sshd[5698]: Accepted publickey for core from 20.161.92.111 port 42318 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:03:59.820476 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:03:59.828000 systemd-logind[1518]: New session 20 of user core.
Mar 6 03:03:59.834303 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 03:04:00.012284 sshd[5701]: Connection closed by 20.161.92.111 port 42318
Mar 6 03:04:00.013478 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:00.020963 systemd[1]: sshd@19-10.128.0.102:22-20.161.92.111:42318.service: Deactivated successfully.
Mar 6 03:04:00.024342 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 03:04:00.026274 systemd-logind[1518]: Session 20 logged out. Waiting for processes to exit.
Mar 6 03:04:00.028919 systemd-logind[1518]: Removed session 20.
Mar 6 03:04:05.062602 systemd[1]: Started sshd@20-10.128.0.102:22-20.161.92.111:54396.service - OpenSSH per-connection server daemon (20.161.92.111:54396).
Mar 6 03:04:05.319183 sshd[5738]: Accepted publickey for core from 20.161.92.111 port 54396 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:04:05.321477 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:05.328161 systemd-logind[1518]: New session 21 of user core.
Mar 6 03:04:05.336307 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 03:04:05.555994 sshd[5742]: Connection closed by 20.161.92.111 port 54396
Mar 6 03:04:05.556589 sshd-session[5738]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:05.568280 systemd-logind[1518]: Session 21 logged out. Waiting for processes to exit.
Mar 6 03:04:05.569756 systemd[1]: sshd@20-10.128.0.102:22-20.161.92.111:54396.service: Deactivated successfully.
Mar 6 03:04:05.576060 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 03:04:05.579864 systemd-logind[1518]: Removed session 21.
Mar 6 03:04:10.603220 systemd[1]: Started sshd@21-10.128.0.102:22-20.161.92.111:37770.service - OpenSSH per-connection server daemon (20.161.92.111:37770).
Mar 6 03:04:10.849312 sshd[5789]: Accepted publickey for core from 20.161.92.111 port 37770 ssh2: RSA SHA256:ZdrmvhMusRUPQ+0Cwgv/b10pj14AUU+Sa1rq8M+qNwg
Mar 6 03:04:10.850936 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 03:04:10.858886 systemd-logind[1518]: New session 22 of user core.
Mar 6 03:04:10.862329 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 03:04:11.040005 sshd[5792]: Connection closed by 20.161.92.111 port 37770
Mar 6 03:04:11.041434 sshd-session[5789]: pam_unix(sshd:session): session closed for user core
Mar 6 03:04:11.047193 systemd[1]: sshd@21-10.128.0.102:22-20.161.92.111:37770.service: Deactivated successfully.
Mar 6 03:04:11.050386 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 03:04:11.052880 systemd-logind[1518]: Session 22 logged out. Waiting for processes to exit.
Mar 6 03:04:11.055516 systemd-logind[1518]: Removed session 22.