Apr 16 23:46:02.125674 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026
Apr 16 23:46:02.125722 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:46:02.125747 kernel: BIOS-provided physical RAM map:
Apr 16 23:46:02.125762 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Apr 16 23:46:02.125786 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Apr 16 23:46:02.125801 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Apr 16 23:46:02.125818 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Apr 16 23:46:02.125834 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Apr 16 23:46:02.125849 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd2f5fff] usable
Apr 16 23:46:02.125868 kernel: BIOS-e820: [mem 0x00000000bd2f6000-0x00000000bd2fffff] ACPI data
Apr 16 23:46:02.125883 kernel: BIOS-e820: [mem 0x00000000bd300000-0x00000000bf8ecfff] usable
Apr 16 23:46:02.125898 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Apr 16 23:46:02.125912 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Apr 16 23:46:02.125928 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Apr 16 23:46:02.125947 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Apr 16 23:46:02.125967 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Apr 16 23:46:02.125984 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Apr 16 23:46:02.126001 kernel: NX (Execute Disable) protection: active
Apr 16 23:46:02.126017 kernel: APIC: Static calls initialized
Apr 16 23:46:02.126033 kernel: efi: EFI v2.7 by EDK II
Apr 16 23:46:02.126051 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd300018 RNG=0xbfb73018 TPMEventLog=0xbd2f6018
Apr 16 23:46:02.126067 kernel: random: crng init done
Apr 16 23:46:02.126084 kernel: secureboot: Secure boot disabled
Apr 16 23:46:02.126100 kernel: SMBIOS 2.4 present.
Apr 16 23:46:02.126117 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Apr 16 23:46:02.126137 kernel: DMI: Memory slots populated: 1/1
Apr 16 23:46:02.126153 kernel: Hypervisor detected: KVM
Apr 16 23:46:02.126169 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Apr 16 23:46:02.126186 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 23:46:02.126202 kernel: kvm-clock: using sched offset of 15252615214 cycles
Apr 16 23:46:02.126220 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 23:46:02.126237 kernel: tsc: Detected 2299.998 MHz processor
Apr 16 23:46:02.126254 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 23:46:02.126272 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 23:46:02.126289 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Apr 16 23:46:02.126309 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Apr 16 23:46:02.126327 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 23:46:02.126344 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Apr 16 23:46:02.126361 kernel: Using GB pages for direct mapping
Apr 16 23:46:02.126378 kernel: ACPI: Early table checksum verification disabled
Apr 16 23:46:02.126402 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Apr 16 23:46:02.126451 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Apr 16 23:46:02.126474 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Apr 16 23:46:02.126492 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Apr 16 23:46:02.126510 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Apr 16 23:46:02.126527 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250807)
Apr 16 23:46:02.126545 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Apr 16 23:46:02.126563 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Apr 16 23:46:02.126581 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Apr 16 23:46:02.126603 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Apr 16 23:46:02.126621 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Apr 16 23:46:02.126639 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Apr 16 23:46:02.126658 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Apr 16 23:46:02.126676 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Apr 16 23:46:02.126694 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Apr 16 23:46:02.126712 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Apr 16 23:46:02.126730 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Apr 16 23:46:02.126748 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Apr 16 23:46:02.126776 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Apr 16 23:46:02.126795 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Apr 16 23:46:02.126812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 16 23:46:02.126831 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Apr 16 23:46:02.126849 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Apr 16 23:46:02.126867 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Apr 16 23:46:02.126885 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Apr 16 23:46:02.126903 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff]
Apr 16 23:46:02.126921 kernel: Zone ranges:
Apr 16 23:46:02.126942 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 23:46:02.126960 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Apr 16 23:46:02.126978 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Apr 16 23:46:02.126996 kernel:   Device   empty
Apr 16 23:46:02.127014 kernel: Movable zone start for each node
Apr 16 23:46:02.127032 kernel: Early memory node ranges
Apr 16 23:46:02.127050 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Apr 16 23:46:02.127068 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Apr 16 23:46:02.127086 kernel:   node   0: [mem 0x0000000000100000-0x00000000bd2f5fff]
Apr 16 23:46:02.127107 kernel:   node   0: [mem 0x00000000bd300000-0x00000000bf8ecfff]
Apr 16 23:46:02.127125 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Apr 16 23:46:02.127141 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Apr 16 23:46:02.127159 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Apr 16 23:46:02.127194 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 23:46:02.127208 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Apr 16 23:46:02.127224 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Apr 16 23:46:02.127241 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Apr 16 23:46:02.127259 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 16 23:46:02.127281 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Apr 16 23:46:02.127295 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 16 23:46:02.127311 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 23:46:02.127329 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 23:46:02.127347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 23:46:02.127362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 23:46:02.127377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 23:46:02.127394 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 23:46:02.129439 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 23:46:02.129477 kernel: CPU topo: Max. logical packages: 1
Apr 16 23:46:02.129495 kernel: CPU topo: Max. logical dies: 1
Apr 16 23:46:02.129512 kernel: CPU topo: Max. dies per package: 1
Apr 16 23:46:02.129528 kernel: CPU topo: Max. threads per core: 2
Apr 16 23:46:02.129545 kernel: CPU topo: Num. cores per package: 1
Apr 16 23:46:02.129562 kernel: CPU topo: Num. threads per package: 2
Apr 16 23:46:02.129579 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Apr 16 23:46:02.129595 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 16 23:46:02.129612 kernel: Booting paravirtualized kernel on KVM
Apr 16 23:46:02.129629 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 23:46:02.129650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 16 23:46:02.129667 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576
Apr 16 23:46:02.129683 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152
Apr 16 23:46:02.129700 kernel: pcpu-alloc: [0] 0 1
Apr 16 23:46:02.129716 kernel: kvm-guest: PV spinlocks enabled
Apr 16 23:46:02.129734 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 23:46:02.129752 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:46:02.129778 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 16 23:46:02.129799 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 23:46:02.129815 kernel: Fallback order for Node 0: 0
Apr 16 23:46:02.129832 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136
Apr 16 23:46:02.129848 kernel: Policy zone: Normal
Apr 16 23:46:02.129864 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 23:46:02.129881 kernel: software IO TLB: area num 2.
Apr 16 23:46:02.129912 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 23:46:02.129934 kernel: Kernel/User page tables isolation: enabled
Apr 16 23:46:02.129951 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 16 23:46:02.129969 kernel: ftrace: allocated 157 pages with 5 groups
Apr 16 23:46:02.129986 kernel: Dynamic Preempt: voluntary
Apr 16 23:46:02.130003 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 23:46:02.130026 kernel: rcu: RCU event tracing is enabled.
Apr 16 23:46:02.130044 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 23:46:02.130061 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 23:46:02.130079 kernel: Rude variant of Tasks RCU enabled.
Apr 16 23:46:02.130097 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 23:46:02.130118 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 23:46:02.130136 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 23:46:02.130154 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:46:02.130171 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:46:02.130189 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:46:02.130206 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 16 23:46:02.130223 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 23:46:02.130241 kernel: Console: colour dummy device 80x25
Apr 16 23:46:02.130262 kernel: printk: legacy console [ttyS0] enabled
Apr 16 23:46:02.130291 kernel: ACPI: Core revision 20240827
Apr 16 23:46:02.130307 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 23:46:02.130323 kernel: x2apic enabled
Apr 16 23:46:02.130340 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 23:46:02.130356 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Apr 16 23:46:02.130374 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 16 23:46:02.130392 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Apr 16 23:46:02.130431 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Apr 16 23:46:02.130456 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Apr 16 23:46:02.130474 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 23:46:02.130493 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Apr 16 23:46:02.130511 kernel: Spectre V2 : Mitigation: IBRS
Apr 16 23:46:02.130529 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 23:46:02.130547 kernel: RETBleed: Mitigation: IBRS
Apr 16 23:46:02.130566 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 16 23:46:02.130584 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Apr 16 23:46:02.130603 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 16 23:46:02.130627 kernel: MDS: Mitigation: Clear CPU buffers
Apr 16 23:46:02.130646 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 23:46:02.130664 kernel: active return thunk: its_return_thunk
Apr 16 23:46:02.130683 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 23:46:02.130701 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 23:46:02.130719 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 23:46:02.130737 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 23:46:02.130755 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 23:46:02.130782 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 16 23:46:02.130806 kernel: Freeing SMP alternatives memory: 32K
Apr 16 23:46:02.130824 kernel: pid_max: default: 32768 minimum: 301
Apr 16 23:46:02.130843 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 23:46:02.130861 kernel: landlock: Up and running.
Apr 16 23:46:02.130880 kernel: SELinux:  Initializing.
Apr 16 23:46:02.130899 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 16 23:46:02.130918 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 16 23:46:02.130936 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Apr 16 23:46:02.130955 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Apr 16 23:46:02.130978 kernel: signal: max sigframe size: 1776
Apr 16 23:46:02.130996 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 23:46:02.131016 kernel: rcu: 	Max phase no-delay instances is 400.
Apr 16 23:46:02.131035 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 23:46:02.131053 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 23:46:02.131072 kernel: smp: Bringing up secondary CPUs ...
Apr 16 23:46:02.131090 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 23:46:02.131108 kernel: .... node  #0, CPUs:       #1
Apr 16 23:46:02.131126 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 16 23:46:02.131151 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 16 23:46:02.131170 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 23:46:02.131188 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Apr 16 23:46:02.131206 kernel: Memory: 7499696K/7860544K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 355264K reserved, 0K cma-reserved)
Apr 16 23:46:02.131224 kernel: devtmpfs: initialized
Apr 16 23:46:02.131242 kernel: x86/mm: Memory block size: 128MB
Apr 16 23:46:02.131261 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Apr 16 23:46:02.131278 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 23:46:02.131301 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 23:46:02.131319 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 23:46:02.131338 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 23:46:02.131356 kernel: audit: initializing netlink subsys (disabled)
Apr 16 23:46:02.131373 kernel: audit: type=2000 audit(1776383158.175:1): state=initialized audit_enabled=0 res=1
Apr 16 23:46:02.131391 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 23:46:02.132284 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 23:46:02.132317 kernel: cpuidle: using governor menu
Apr 16 23:46:02.132336 kernel: efi: Freeing EFI boot services memory: 56356K
Apr 16 23:46:02.132360 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 23:46:02.132378 kernel: dca service started, version 1.12.1
Apr 16 23:46:02.132396 kernel: PCI: Using configuration type 1 for base access
Apr 16 23:46:02.133454 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 23:46:02.133478 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 23:46:02.133497 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 23:46:02.133514 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 23:46:02.133531 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 23:46:02.133548 kernel: ACPI: Added _OSI(Module Device)
Apr 16 23:46:02.133573 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 23:46:02.133592 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 23:46:02.133611 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 16 23:46:02.133630 kernel: ACPI: Interpreter enabled
Apr 16 23:46:02.133649 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 23:46:02.133667 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 23:46:02.133686 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 23:46:02.133705 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 16 23:46:02.133724 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Apr 16 23:46:02.133747 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 23:46:02.134021 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 23:46:02.134215 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 16 23:46:02.134389 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 16 23:46:02.134428 kernel: PCI host bridge to bus 0000:00
Apr 16 23:46:02.134617 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Apr 16 23:46:02.134801 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Apr 16 23:46:02.134969 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 23:46:02.135127 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Apr 16 23:46:02.135286 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 23:46:02.137291 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Apr 16 23:46:02.137549 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Apr 16 23:46:02.137763 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Apr 16 23:46:02.137961 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 16 23:46:02.138152 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Apr 16 23:46:02.138325 kernel: pci 0000:00:03.0: BAR 0 [io  0xc040-0xc07f]
Apr 16 23:46:02.140516 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Apr 16 23:46:02.140836 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 16 23:46:02.141033 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc03f]
Apr 16 23:46:02.141227 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Apr 16 23:46:02.141437 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 16 23:46:02.141645 kernel: pci 0000:00:05.0: BAR 0 [io  0xc080-0xc09f]
Apr 16 23:46:02.141838 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Apr 16 23:46:02.141864 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 23:46:02.141883 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 23:46:02.141900 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 23:46:02.141917 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 23:46:02.141944 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 16 23:46:02.141963 kernel: iommu: Default domain type: Translated
Apr 16 23:46:02.141980 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 23:46:02.141998 kernel: efivars: Registered efivars operations
Apr 16 23:46:02.142017 kernel: PCI: Using ACPI for IRQ routing
Apr 16 23:46:02.142036 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 23:46:02.142054 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Apr 16 23:46:02.142073 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Apr 16 23:46:02.142091 kernel: e820: reserve RAM buffer [mem 0xbd2f6000-0xbfffffff]
Apr 16 23:46:02.142115 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Apr 16 23:46:02.142134 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Apr 16 23:46:02.142153 kernel: vgaarb: loaded
Apr 16 23:46:02.142172 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 23:46:02.142192 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 23:46:02.142210 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 23:46:02.142229 kernel: pnp: PnP ACPI init
Apr 16 23:46:02.142248 kernel: pnp: PnP ACPI: found 7 devices
Apr 16 23:46:02.142267 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 23:46:02.142291 kernel: NET: Registered PF_INET protocol family
Apr 16 23:46:02.142310 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 16 23:46:02.142329 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 16 23:46:02.142348 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 23:46:02.142368 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 23:46:02.142387 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 16 23:46:02.142407 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 16 23:46:02.144495 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 16 23:46:02.144517 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 16 23:46:02.144555 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 23:46:02.144574 kernel: NET: Registered PF_XDP protocol family
Apr 16 23:46:02.144776 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Apr 16 23:46:02.144948 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Apr 16 23:46:02.145113 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 23:46:02.145278 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Apr 16 23:46:02.145487 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 16 23:46:02.145527 kernel: PCI: CLS 0 bytes, default 64
Apr 16 23:46:02.145547 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 16 23:46:02.145566 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Apr 16 23:46:02.145586 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 23:46:02.145605 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 16 23:46:02.145625 kernel: clocksource: Switched to clocksource tsc
Apr 16 23:46:02.145643 kernel: Initialise system trusted keyrings
Apr 16 23:46:02.145663 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 16 23:46:02.145682 kernel: Key type asymmetric registered
Apr 16 23:46:02.145704 kernel: Asymmetric key parser 'x509' registered
Apr 16 23:46:02.145723 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 16 23:46:02.145741 kernel: io scheduler mq-deadline registered
Apr 16 23:46:02.145760 kernel: io scheduler kyber registered
Apr 16 23:46:02.145778 kernel: io scheduler bfq registered
Apr 16 23:46:02.145797 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 23:46:02.145817 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 16 23:46:02.146005 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Apr 16 23:46:02.146030 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 16 23:46:02.146222 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Apr 16 23:46:02.146246 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 16 23:46:02.148435 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Apr 16 23:46:02.148467 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 23:46:02.148486 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 23:46:02.148505 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 16 23:46:02.148533 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Apr 16 23:46:02.148552 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Apr 16 23:46:02.148779 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Apr 16 23:46:02.148811 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 23:46:02.148830 kernel: i8042: Warning: Keylock active
Apr 16 23:46:02.148848 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 23:46:02.148867 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 23:46:02.149068 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 16 23:46:02.149245 kernel: rtc_cmos 00:00: registered as rtc0
Apr 16 23:46:02.149764 kernel: rtc_cmos 00:00: setting system clock to 2026-04-16T23:46:01 UTC (1776383161)
Apr 16 23:46:02.149969 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 16 23:46:02.149993 kernel: intel_pstate: CPU model not supported
Apr 16 23:46:02.150013 kernel: pstore: Using crash dump compression: deflate
Apr 16 23:46:02.150032 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 16 23:46:02.150051 kernel: NET: Registered PF_INET6 protocol family
Apr 16 23:46:02.150070 kernel: Segment Routing with IPv6
Apr 16 23:46:02.150088 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 23:46:02.150107 kernel: NET: Registered PF_PACKET protocol family
Apr 16 23:46:02.150126 kernel: Key type dns_resolver registered
Apr 16 23:46:02.150149 kernel: IPI shorthand broadcast: enabled
Apr 16 23:46:02.150167 kernel: sched_clock: Marking stable (3902004967, 163372598)->(4118232507, -52854942)
Apr 16 23:46:02.150186 kernel: registered taskstats version 1
Apr 16 23:46:02.150204 kernel: Loading compiled-in X.509 certificates
Apr 16 23:46:02.150223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 92f69eed5a22c94634d5240e5e65306547d4ba83'
Apr 16 23:46:02.150241 kernel: Demotion targets for Node 0: null
Apr 16 23:46:02.150259 kernel: Key type .fscrypt registered
Apr 16 23:46:02.150277 kernel: Key type fscrypt-provisioning registered
Apr 16 23:46:02.150295 kernel: ima: Allocated hash algorithm: sha1
Apr 16 23:46:02.150318 kernel: ima: No architecture policies found
Apr 16 23:46:02.150337 kernel: clk: Disabling unused clocks
Apr 16 23:46:02.150354 kernel: Warning: unable to open an initial console.
Apr 16 23:46:02.150374 kernel: Freeing unused kernel image (initmem) memory: 46216K
Apr 16 23:46:02.150393 kernel: Write protecting the kernel read-only data: 40960k
Apr 16 23:46:02.150426 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 16 23:46:02.151360 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 23:46:02.151381 kernel: Run /init as init process
Apr 16 23:46:02.151401 kernel:   with arguments:
Apr 16 23:46:02.151809 kernel:     /init
Apr 16 23:46:02.151829 kernel:   with environment:
Apr 16 23:46:02.151848 kernel:     HOME=/
Apr 16 23:46:02.151867 kernel:     TERM=linux
Apr 16 23:46:02.151888 systemd[1]: Successfully made /usr/ read-only.
Apr 16 23:46:02.151912 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:46:02.151933 systemd[1]: Detected virtualization google.
Apr 16 23:46:02.151958 systemd[1]: Detected architecture x86-64.
Apr 16 23:46:02.151977 systemd[1]: Running in initrd.
Apr 16 23:46:02.151996 systemd[1]: No hostname configured, using default hostname.
Apr 16 23:46:02.152017 systemd[1]: Hostname set to .
Apr 16 23:46:02.152036 systemd[1]: Initializing machine ID from random generator.
Apr 16 23:46:02.152057 systemd[1]: Queued start job for default target initrd.target.
Apr 16 23:46:02.152095 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:46:02.152119 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:46:02.152141 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 23:46:02.152162 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:46:02.152183 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 23:46:02.152205 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 23:46:02.152227 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 23:46:02.152252 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 23:46:02.152273 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:46:02.152294 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:46:02.152314 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:46:02.152334 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:46:02.152355 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:46:02.152376 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:46:02.152396 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:46:02.152478 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:46:02.152498 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 23:46:02.152527 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 16 23:46:02.152548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:46:02.152570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:46:02.152589 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:46:02.152612 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:46:02.152631 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 23:46:02.152650 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:46:02.152675 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 23:46:02.152700 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 16 23:46:02.152719 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 23:46:02.152739 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:46:02.152762 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:46:02.152783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:46:02.152803 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 23:46:02.152827 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:46:02.152847 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 23:46:02.154511 systemd-journald[191]: Collecting audit messages is disabled.
Apr 16 23:46:02.154577 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 23:46:02.154600 systemd-journald[191]: Journal started
Apr 16 23:46:02.154646 systemd-journald[191]: Runtime Journal (/run/log/journal/c73a954be616496c99dda4dd8992fb47) is 8M, max 148.6M, 140.6M free.
Apr 16 23:46:02.132059 systemd-modules-load[193]: Inserted module 'overlay'
Apr 16 23:46:02.158629 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:46:02.167613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:46:02.173829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:46:02.180869 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:46:02.191240 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 23:46:02.191276 kernel: Bridge firewalling registered
Apr 16 23:46:02.190567 systemd-modules-load[193]: Inserted module 'br_netfilter'
Apr 16 23:46:02.193043 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 23:46:02.199483 systemd-tmpfiles[207]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 16 23:46:02.201812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:46:02.210770 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:46:02.211714 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:46:02.224644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:46:02.230494 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:46:02.245321 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:46:02.252742 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:46:02.255112 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 23:46:02.268593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:46:02.289477 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:46:02.341022 systemd-resolved[231]: Positive Trust Anchors:
Apr 16 23:46:02.341577 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:46:02.341653 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:46:02.351489 systemd-resolved[231]: Defaulting to hostname 'linux'.
Apr 16 23:46:02.355396 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:46:02.360644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:46:02.414459 kernel: SCSI subsystem initialized
Apr 16 23:46:02.426436 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 23:46:02.438449 kernel: iscsi: registered transport (tcp)
Apr 16 23:46:02.463767 kernel: iscsi: registered transport (qla4xxx)
Apr 16 23:46:02.463850 kernel: QLogic iSCSI HBA Driver
Apr 16 23:46:02.487377 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:46:02.506515 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:46:02.513324 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:46:02.575479 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:46:02.577697 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 23:46:02.636449 kernel: raid6: avx2x4 gen() 18215 MB/s
Apr 16 23:46:02.653440 kernel: raid6: avx2x2 gen() 18218 MB/s
Apr 16 23:46:02.670814 kernel: raid6: avx2x1 gen() 14230 MB/s
Apr 16 23:46:02.670860 kernel: raid6: using algorithm avx2x2 gen() 18218 MB/s
Apr 16 23:46:02.688900 kernel: raid6: .... xor() 18640 MB/s, rmw enabled
Apr 16 23:46:02.688965 kernel: raid6: using avx2x2 recovery algorithm
Apr 16 23:46:02.711451 kernel: xor: automatically using best checksumming function avx
Apr 16 23:46:02.894455 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 23:46:02.902997 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:46:02.906572 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:46:02.938021 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Apr 16 23:46:02.946880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:46:02.953459 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 23:46:02.986291 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Apr 16 23:46:03.018468 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:46:03.020469 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:46:03.114390 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:46:03.122184 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 23:46:03.229118 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Apr 16 23:46:03.237474 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 23:46:03.259439 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 16 23:46:03.270437 kernel: AES CTR mode by8 optimization enabled
Apr 16 23:46:03.286888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:46:03.289611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:46:03.297862 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:46:03.304577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:46:03.307051 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:46:03.316039 kernel: scsi host0: Virtio SCSI HBA
Apr 16 23:46:03.316126 kernel: blk-mq: reduced tag depth to 10240
Apr 16 23:46:03.324464 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Apr 16 23:46:03.375766 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Apr 16 23:46:03.376089 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Apr 16 23:46:03.376307 kernel: sd 0:0:1:0: [sda] Write Protect is off
Apr 16 23:46:03.377445 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Apr 16 23:46:03.379601 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 23:46:03.387745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:46:03.393805 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 23:46:03.393841 kernel: GPT:17805311 != 33554431
Apr 16 23:46:03.393865 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 23:46:03.393889 kernel: GPT:17805311 != 33554431
Apr 16 23:46:03.393913 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 23:46:03.393937 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:46:03.395464 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Apr 16 23:46:03.476061 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Apr 16 23:46:03.480939 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:46:03.508001 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Apr 16 23:46:03.526842 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Apr 16 23:46:03.527128 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Apr 16 23:46:03.546589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Apr 16 23:46:03.546902 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:46:03.551697 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:46:03.559499 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:46:03.565014 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 23:46:03.571796 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 23:46:03.591499 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:46:03.605336 disk-uuid[592]: Primary Header is updated.
Apr 16 23:46:03.605336 disk-uuid[592]: Secondary Entries is updated.
Apr 16 23:46:03.605336 disk-uuid[592]: Secondary Header is updated.
Apr 16 23:46:03.613833 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:46:04.642488 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:46:04.643326 disk-uuid[600]: The operation has completed successfully.
Apr 16 23:46:04.721958 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 23:46:04.722128 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 23:46:04.766428 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 23:46:04.785609 sh[614]: Success
Apr 16 23:46:04.808449 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 23:46:04.808546 kernel: device-mapper: uevent: version 1.0.3
Apr 16 23:46:04.808576 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 16 23:46:04.821436 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Apr 16 23:46:04.895621 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 23:46:04.904546 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 23:46:04.921988 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 23:46:04.941679 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (626)
Apr 16 23:46:04.941742 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503
Apr 16 23:46:04.941768 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:46:04.969663 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Apr 16 23:46:04.969748 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 16 23:46:04.969786 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 16 23:46:04.973999 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 23:46:04.977749 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:46:04.980841 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 23:46:04.983312 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 23:46:04.988502 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 23:46:05.033461 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (659)
Apr 16 23:46:05.036028 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:46:05.036090 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:46:05.043586 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:46:05.043655 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:46:05.043680 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:46:05.050595 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:46:05.051828 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 23:46:05.056718 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 23:46:05.182204 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:46:05.191863 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:46:05.307930 ignition[714]: Ignition 2.22.0
Apr 16 23:46:05.308374 ignition[714]: Stage: fetch-offline
Apr 16 23:46:05.310654 systemd-networkd[795]: lo: Link UP
Apr 16 23:46:05.308458 ignition[714]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:46:05.310661 systemd-networkd[795]: lo: Gained carrier
Apr 16 23:46:05.308474 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 16 23:46:05.311353 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:46:05.308656 ignition[714]: parsed url from cmdline: ""
Apr 16 23:46:05.313359 systemd-networkd[795]: Enumeration completed
Apr 16 23:46:05.308663 ignition[714]: no config URL provided
Apr 16 23:46:05.314868 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:46:05.308673 ignition[714]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:46:05.314875 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:46:05.308687 ignition[714]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:46:05.316274 systemd-networkd[795]: eth0: Link UP
Apr 16 23:46:05.308698 ignition[714]: failed to fetch config: resource requires networking
Apr 16 23:46:05.316618 systemd-networkd[795]: eth0: Gained carrier
Apr 16 23:46:05.308966 ignition[714]: Ignition finished successfully
Apr 16 23:46:05.316635 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:46:05.317669 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:46:05.370800 ignition[804]: Ignition 2.22.0
Apr 16 23:46:05.321224 systemd[1]: Reached target network.target - Network.
Apr 16 23:46:05.370808 ignition[804]: Stage: fetch
Apr 16 23:46:05.322818 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 16 23:46:05.371053 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:46:05.329508 systemd-networkd[795]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a.c.flatcar-212911.internal' to 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a'
Apr 16 23:46:05.371071 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 16 23:46:05.329526 systemd-networkd[795]: eth0: DHCPv4 address 10.128.0.50/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 16 23:46:05.371225 ignition[804]: parsed url from cmdline: ""
Apr 16 23:46:05.386744 unknown[804]: fetched base config from "system"
Apr 16 23:46:05.371232 ignition[804]: no config URL provided
Apr 16 23:46:05.386754 unknown[804]: fetched base config from "system"
Apr 16 23:46:05.371242 ignition[804]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:46:05.386763 unknown[804]: fetched user config from "gcp"
Apr 16 23:46:05.371256 ignition[804]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:46:05.390398 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 23:46:05.371306 ignition[804]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Apr 16 23:46:05.397075 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 23:46:05.375329 ignition[804]: GET result: OK
Apr 16 23:46:05.375504 ignition[804]: parsing config with SHA512: 353b80298559de5a9e1ff3dfbfa759bae95462c71f00e4f6685a55da10c05f91822609f081225ab7676cf5799e6cf9368842c18b35af0017a5f54ad89eaf044d
Apr 16 23:46:05.387357 ignition[804]: fetch: fetch complete
Apr 16 23:46:05.387367 ignition[804]: fetch: fetch passed
Apr 16 23:46:05.387475 ignition[804]: Ignition finished successfully
Apr 16 23:46:05.445817 ignition[811]: Ignition 2.22.0
Apr 16 23:46:05.445834 ignition[811]: Stage: kargs
Apr 16 23:46:05.448864 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 23:46:05.446066 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:46:05.446083 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 16 23:46:05.456471 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 23:46:05.447172 ignition[811]: kargs: kargs passed
Apr 16 23:46:05.447227 ignition[811]: Ignition finished successfully
Apr 16 23:46:05.499748 ignition[818]: Ignition 2.22.0
Apr 16 23:46:05.499765 ignition[818]: Stage: disks
Apr 16 23:46:05.503267 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 23:46:05.499980 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:46:05.508875 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 23:46:05.499997 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 16 23:46:05.515549 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 23:46:05.501186 ignition[818]: disks: disks passed
Apr 16 23:46:05.519547 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:46:05.501245 ignition[818]: Ignition finished successfully
Apr 16 23:46:05.523542 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:46:05.527512 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:46:05.532831 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 23:46:05.569917 systemd-fsck[827]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Apr 16 23:46:05.578691 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 23:46:05.581851 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 23:46:05.758460 kernel: EXT4-fs (sda9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none.
Apr 16 23:46:05.759134 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 23:46:05.763585 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:46:05.772313 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:46:05.777874 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 23:46:05.784721 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 23:46:05.784942 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 23:46:05.784993 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:46:05.800535 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (835)
Apr 16 23:46:05.803987 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:46:05.804034 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:46:05.804261 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 23:46:05.807600 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 23:46:05.814733 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:46:05.814771 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:46:05.814796 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:46:05.818237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:46:05.925085 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 23:46:05.934966 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory
Apr 16 23:46:05.942998 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 23:46:05.948432 initrd-setup-root[880]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 23:46:06.088654 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 23:46:06.097029 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 23:46:06.101695 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 23:46:06.125776 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 23:46:06.127619 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:46:06.157570 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 23:46:06.169638 ignition[947]: INFO : Ignition 2.22.0
Apr 16 23:46:06.169638 ignition[947]: INFO : Stage: mount
Apr 16 23:46:06.176499 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:46:06.176499 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 16 23:46:06.176499 ignition[947]: INFO : mount: mount passed
Apr 16 23:46:06.176499 ignition[947]: INFO : Ignition finished successfully
Apr 16 23:46:06.172376 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 23:46:06.177325 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 23:46:06.206132 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:46:06.234442 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (960)
Apr 16 23:46:06.236954 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:46:06.237007 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:46:06.245227 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:46:06.245285 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:46:06.245309 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:46:06.247790 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:46:06.285550 ignition[977]: INFO : Ignition 2.22.0
Apr 16 23:46:06.285550 ignition[977]: INFO : Stage: files
Apr 16 23:46:06.291526 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:46:06.291526 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 16 23:46:06.291526 ignition[977]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 23:46:06.291526 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 23:46:06.291526 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 23:46:06.309500 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 23:46:06.309500 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 23:46:06.309500 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 23:46:06.309500 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 23:46:06.309500 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 23:46:06.293330 unknown[977]: wrote ssh authorized keys file for user: core
Apr 16 23:46:06.820641 systemd-networkd[795]: eth0: Gained IPv6LL
Apr 16 23:46:09.528596 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 23:46:09.679330 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 23:46:09.682701 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 16 23:46:10.149066 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 23:46:11.000601 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 23:46:11.000601 ignition[977]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 23:46:11.007467 ignition[977]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:46:11.007467 ignition[977]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:46:11.007467 ignition[977]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 23:46:11.007467 ignition[977]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 23:46:11.007467 ignition[977]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 23:46:11.007467 ignition[977]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:46:11.007467 ignition[977]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:46:11.007467 ignition[977]: INFO : files: files passed
Apr 16 23:46:11.007467 ignition[977]: INFO : Ignition finished successfully
Apr 16 23:46:11.007449 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 23:46:11.010439 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 23:46:11.025584 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 23:46:11.033000 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 23:46:11.033143 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 23:46:11.058781 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:46:11.062550 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:46:11.061843 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:46:11.071579 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:46:11.064205 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 23:46:11.070003 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 23:46:11.128825 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 23:46:11.128961 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 23:46:11.133453 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 23:46:11.135740 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 23:46:11.139860 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 23:46:11.142032 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 23:46:11.176117 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:46:11.178262 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 23:46:11.205528 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:46:11.208671 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:46:11.213750 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 23:46:11.217068 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 23:46:11.217552 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:46:11.225929 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 23:46:11.228945 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 23:46:11.232922 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 23:46:11.236840 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:46:11.240926 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 23:46:11.244925 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:46:11.248918 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 23:46:11.252919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:46:11.256948 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 23:46:11.261937 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 23:46:11.265928 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 23:46:11.269874 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 23:46:11.270297 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:46:11.276788 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:46:11.279969 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:46:11.283870 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 23:46:11.284163 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:46:11.287890 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 23:46:11.288314 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:46:11.301578 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 23:46:11.302050 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:46:11.304957 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 23:46:11.305407 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 23:46:11.310640 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 23:46:11.316757 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 23:46:11.316971 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:46:11.331894 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 23:46:11.334945 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 23:46:11.335741 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:46:11.348872 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 23:46:11.349539 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:46:11.355597 ignition[1031]: INFO : Ignition 2.22.0
Apr 16 23:46:11.355597 ignition[1031]: INFO : Stage: umount
Apr 16 23:46:11.355597 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:46:11.355597 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 16 23:46:11.366706 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 23:46:11.371654 ignition[1031]: INFO : umount: umount passed
Apr 16 23:46:11.371654 ignition[1031]: INFO : Ignition finished successfully
Apr 16 23:46:11.366880 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 23:46:11.376581 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 23:46:11.380015 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 23:46:11.380292 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 23:46:11.384718 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 23:46:11.384889 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 23:46:11.387779 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 23:46:11.387967 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 23:46:11.392567 systemd[1]: Stopped target network.target - Network.
Apr 16 23:46:11.396568 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 23:46:11.396656 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:46:11.400588 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 23:46:11.404558 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 23:46:11.408494 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:46:11.408744 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 23:46:11.415525 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 23:46:11.419604 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 23:46:11.419682 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:46:11.423576 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 23:46:11.423649 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:46:11.427579 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 23:46:11.427674 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 23:46:11.433607 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 23:46:11.433686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 23:46:11.437938 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 23:46:11.440982 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 23:46:11.448775 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 23:46:11.449022 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 23:46:11.455525 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 23:46:11.455801 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 23:46:11.455931 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 23:46:11.460247 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 23:46:11.460585 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 23:46:11.460689 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 23:46:11.464934 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 23:46:11.465075 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 23:46:11.471836 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 23:46:11.474772 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 23:46:11.474948 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:46:11.478747 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 23:46:11.478922 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 23:46:11.484810 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 23:46:11.489762 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 23:46:11.489842 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:46:11.499620 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 23:46:11.499694 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:46:11.502966 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 23:46:11.503024 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:46:11.505774 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 23:46:11.505957 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:46:11.511127 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:46:11.519123 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 23:46:11.519194 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:46:11.519671 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 23:46:11.519832 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:46:11.526052 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 23:46:11.526491 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:46:11.534581 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 23:46:11.534639 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:46:11.538531 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 23:46:11.538605 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:46:11.544510 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 23:46:11.544591 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:46:11.555547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 23:46:11.555634 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:46:11.562778 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 23:46:11.573517 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 23:46:11.573623 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:46:11.576948 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 23:46:11.577171 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:46:11.586783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:46:11.586864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:46:11.594473 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 16 23:46:11.594548 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 16 23:46:11.677543 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Apr 16 23:46:11.594596 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:46:11.595043 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 23:46:11.595154 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 23:46:11.599788 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 23:46:11.599919 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 23:46:11.606554 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 23:46:11.610114 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 23:46:11.636779 systemd[1]: Switching root.
Apr 16 23:46:11.702558 systemd-journald[191]: Journal stopped
Apr 16 23:46:13.685797 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 23:46:13.685844 kernel: SELinux: policy capability open_perms=1
Apr 16 23:46:13.685871 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 23:46:13.685889 kernel: SELinux: policy capability always_check_network=0
Apr 16 23:46:13.685906 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 23:46:13.685923 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 23:46:13.685944 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 23:46:13.685961 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 23:46:13.685983 kernel: SELinux: policy capability userspace_initial_context=0
Apr 16 23:46:13.686001 kernel: audit: type=1403 audit(1776383172.249:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 23:46:13.686022 systemd[1]: Successfully loaded SELinux policy in 58.727ms.
Apr 16 23:46:13.686043 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.158ms.
Apr 16 23:46:13.686065 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:46:13.686086 systemd[1]: Detected virtualization google.
Apr 16 23:46:13.686111 systemd[1]: Detected architecture x86-64.
Apr 16 23:46:13.686132 systemd[1]: Detected first boot.
Apr 16 23:46:13.686153 systemd[1]: Initializing machine ID from random generator.
Apr 16 23:46:13.686175 zram_generator::config[1076]: No configuration found.
Apr 16 23:46:13.686196 kernel: Guest personality initialized and is inactive
Apr 16 23:46:13.686216 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 16 23:46:13.686242 kernel: Initialized host personality
Apr 16 23:46:13.686262 kernel: NET: Registered PF_VSOCK protocol family
Apr 16 23:46:13.686284 systemd[1]: Populated /etc with preset unit settings.
Apr 16 23:46:13.686307 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 16 23:46:13.686332 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 23:46:13.686365 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 23:46:13.686388 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:46:13.686439 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 23:46:13.686463 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 23:46:13.686486 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 23:46:13.686509 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 23:46:13.686530 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 23:46:13.686552 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 23:46:13.686575 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 23:46:13.686602 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 23:46:13.686621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:46:13.686639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:46:13.686659 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 23:46:13.686680 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 23:46:13.686702 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 23:46:13.686730 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:46:13.686750 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 23:46:13.686772 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:46:13.686797 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:46:13.686818 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 23:46:13.686839 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 23:46:13.686859 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:46:13.686879 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 23:46:13.686898 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:46:13.688472 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:46:13.688507 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:46:13.688530 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:46:13.688552 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 23:46:13.688573 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 23:46:13.688595 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 16 23:46:13.688619 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:46:13.688645 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:46:13.688670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:46:13.688692 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 23:46:13.688714 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 23:46:13.688736 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 23:46:13.688757 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 23:46:13.688779 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:46:13.688806 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 23:46:13.688828 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 23:46:13.688849 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 23:46:13.688871 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 23:46:13.688894 systemd[1]: Reached target machines.target - Containers.
Apr 16 23:46:13.688915 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 23:46:13.688937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:46:13.688959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:46:13.688984 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 23:46:13.689006 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:46:13.689028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:46:13.689050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:46:13.689071 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 23:46:13.689092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:46:13.689115 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 23:46:13.689138 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 23:46:13.689160 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 23:46:13.689185 kernel: ACPI: bus type drm_connector registered
Apr 16 23:46:13.689206 kernel: fuse: init (API version 7.41)
Apr 16 23:46:13.689226 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 23:46:13.689247 kernel: loop: module loaded
Apr 16 23:46:13.689268 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 23:46:13.689291 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:46:13.689313 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:46:13.689334 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:46:13.689368 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:46:13.689445 systemd-journald[1164]: Collecting audit messages is disabled.
Apr 16 23:46:13.689492 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 23:46:13.689515 systemd-journald[1164]: Journal started
Apr 16 23:46:13.689561 systemd-journald[1164]: Runtime Journal (/run/log/journal/08dde8577095424e88f03c7eb5bcde76) is 8M, max 148.6M, 140.6M free.
Apr 16 23:46:13.105975 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 23:46:13.115077 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 16 23:46:13.115664 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 23:46:13.719787 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 16 23:46:13.744557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:46:13.761440 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 23:46:13.761517 systemd[1]: Stopped verity-setup.service.
Apr 16 23:46:13.789445 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:46:13.802469 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:46:13.812039 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 23:46:13.820745 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 23:46:13.829738 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 23:46:13.838718 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 23:46:13.847692 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 23:46:13.856726 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 23:46:13.865883 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 23:46:13.875882 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:46:13.886833 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 23:46:13.887089 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 23:46:13.897825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:46:13.898089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:46:13.908825 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:46:13.909066 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:46:13.917818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:46:13.918073 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:46:13.928835 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 23:46:13.929110 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 23:46:13.937850 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:46:13.938114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:46:13.946927 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:46:13.955907 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:46:13.966895 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 23:46:13.977877 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 16 23:46:13.988879 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:46:14.011739 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:46:14.021949 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 23:46:14.038521 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 23:46:14.047550 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 23:46:14.047608 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:46:14.057692 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 16 23:46:14.068764 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 23:46:14.077700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:46:14.085609 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 23:46:14.097603 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 23:46:14.107597 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:46:14.110286 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 23:46:14.119590 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:46:14.123599 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:46:14.131830 systemd-journald[1164]: Time spent on flushing to /var/log/journal/08dde8577095424e88f03c7eb5bcde76 is 73.850ms for 958 entries.
Apr 16 23:46:14.131830 systemd-journald[1164]: System Journal (/var/log/journal/08dde8577095424e88f03c7eb5bcde76) is 8M, max 584.8M, 576.8M free.
Apr 16 23:46:14.257107 systemd-journald[1164]: Received client request to flush runtime journal.
Apr 16 23:46:14.142753 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 23:46:14.155006 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 23:46:14.169906 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 23:46:14.180702 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 23:46:14.191956 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 23:46:14.203575 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 23:46:14.216876 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 16 23:46:14.258954 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 23:46:14.286802 kernel: loop0: detected capacity change from 0 to 128560
Apr 16 23:46:14.285926 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 23:46:14.288220 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 16 23:46:14.300273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:46:14.343565 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 23:46:14.357822 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:46:14.367443 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 23:46:14.401602 kernel: loop1: detected capacity change from 0 to 50736
Apr 16 23:46:14.438459 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Apr 16 23:46:14.438958 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Apr 16 23:46:14.451490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:46:14.471567 kernel: loop2: detected capacity change from 0 to 219192
Apr 16 23:46:14.561486 kernel: loop3: detected capacity change from 0 to 110984
Apr 16 23:46:14.647585 kernel: loop4: detected capacity change from 0 to 128560
Apr 16 23:46:14.702203 kernel: loop5: detected capacity change from 0 to 50736
Apr 16 23:46:14.750492 kernel: loop6: detected capacity change from 0 to 219192
Apr 16 23:46:14.794505 kernel: loop7: detected capacity change from 0 to 110984
Apr 16 23:46:14.836481 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Apr 16 23:46:14.837860 (sd-merge)[1225]: Merged extensions into '/usr'.
Apr 16 23:46:14.847339 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 23:46:14.847370 systemd[1]: Reloading...
Apr 16 23:46:15.020440 zram_generator::config[1247]: No configuration found.
Apr 16 23:46:15.214502 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 23:46:15.456979 systemd[1]: Reloading finished in 608 ms.
Apr 16 23:46:15.479552 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 23:46:15.488905 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 23:46:15.499893 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 23:46:15.522814 systemd[1]: Starting ensure-sysext.service...
Apr 16 23:46:15.535787 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:46:15.547734 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:46:15.576767 systemd[1]: Reload requested from client PID 1292 ('systemctl') (unit ensure-sysext.service)...
Apr 16 23:46:15.576790 systemd[1]: Reloading...
Apr 16 23:46:15.581345 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 23:46:15.582344 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 23:46:15.582846 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 23:46:15.583383 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 23:46:15.588237 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 23:46:15.588973 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Apr 16 23:46:15.589222 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Apr 16 23:46:15.597076 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:46:15.597222 systemd-tmpfiles[1293]: Skipping /boot
Apr 16 23:46:15.613400 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:46:15.613574 systemd-tmpfiles[1293]: Skipping /boot
Apr 16 23:46:15.654836 systemd-udevd[1294]: Using default interface naming scheme 'v255'.
Apr 16 23:46:15.707479 zram_generator::config[1321]: No configuration found.
Apr 16 23:46:16.093571 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 23:46:16.126628 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Apr 16 23:46:16.166485 kernel: ACPI: button: Power Button [PWRF]
Apr 16 23:46:16.185437 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 16 23:46:16.248437 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Apr 16 23:46:16.353455 kernel: ACPI: button: Sleep Button [SLPF]
Apr 16 23:46:16.376445 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 16 23:46:16.376624 systemd[1]: Reloading finished in 798 ms.
Apr 16 23:46:16.391642 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:46:16.407673 kernel: EDAC MC: Ver: 3.0.0
Apr 16 23:46:16.423471 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:46:16.510862 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Apr 16 23:46:16.519683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:46:16.523533 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:46:16.534984 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 23:46:16.545795 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:46:16.548722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:46:16.560953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:46:16.577916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:46:16.586738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:46:16.587472 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:46:16.591372 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 23:46:16.606757 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:46:16.623546 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:46:16.627888 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 23:46:16.628007 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:46:16.634792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:46:16.635594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:46:16.657524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:46:16.657826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:46:16.684562 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:46:16.684894 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:46:16.739260 systemd[1]: Finished ensure-sysext.service.
Apr 16 23:46:16.749431 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 23:46:16.758731 augenrules[1446]: No rules
Apr 16 23:46:16.760358 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 23:46:16.771194 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 23:46:16.771537 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 23:46:16.781146 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 23:46:16.806062 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Apr 16 23:46:16.817885 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:46:16.818168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:46:16.819614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:46:16.829717 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:46:16.844742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:46:16.860761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:46:16.876601 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 16 23:46:16.883662 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:46:16.897747 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 23:46:16.907530 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:46:16.907654 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 23:46:16.909084 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 23:46:16.932710 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 23:46:16.936300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:46:16.953526 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 23:46:16.953587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:46:16.955195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:46:16.956591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:46:16.972953 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:46:16.973248 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:46:16.973826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:46:16.974124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:46:16.974793 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:46:16.975071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:46:16.977717 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 23:46:16.978246 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 23:46:16.989510 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:46:16.989623 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:46:16.999926 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 16 23:46:17.004075 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Apr 16 23:46:17.034839 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 23:46:17.080380 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Apr 16 23:46:17.101506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:46:17.159269 systemd-networkd[1427]: lo: Link UP
Apr 16 23:46:17.159288 systemd-networkd[1427]: lo: Gained carrier
Apr 16 23:46:17.162809 systemd-networkd[1427]: Enumeration completed
Apr 16 23:46:17.163375 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:46:17.163383 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:46:17.163558 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:46:17.164542 systemd-networkd[1427]: eth0: Link UP
Apr 16 23:46:17.164904 systemd-networkd[1427]: eth0: Gained carrier
Apr 16 23:46:17.165019 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:46:17.176491 systemd-networkd[1427]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a.c.flatcar-212911.internal' to 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a'
Apr 16 23:46:17.176514 systemd-networkd[1427]: eth0: DHCPv4 address 10.128.0.50/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 16 23:46:17.176622 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 16 23:46:17.185547 systemd-resolved[1428]: Positive Trust Anchors:
Apr 16 23:46:17.185591 systemd-resolved[1428]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:46:17.185653 systemd-resolved[1428]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:46:17.190058 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 23:46:17.193968 systemd-resolved[1428]: Defaulting to hostname 'linux'.
Apr 16 23:46:17.200719 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:46:17.209946 systemd[1]: Reached target network.target - Network.
Apr 16 23:46:17.217568 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:46:17.227549 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:46:17.236753 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 23:46:17.246619 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 23:46:17.256603 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 16 23:46:17.266682 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 23:46:17.275617 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 23:46:17.285574 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 23:46:17.295514 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 23:46:17.295574 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:46:17.304505 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:46:17.315285 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 23:46:17.326091 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 23:46:17.335713 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 16 23:46:17.345696 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 16 23:46:17.355532 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 16 23:46:17.375187 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 23:46:17.383948 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 16 23:46:17.395754 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 16 23:46:17.405740 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 23:46:17.416173 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:46:17.424514 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:46:17.433641 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:46:17.433697 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:46:17.435259 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 23:46:17.455759 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 16 23:46:17.470743 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 23:46:17.484598 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 23:46:17.505676 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 23:46:17.517781 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 23:46:17.526560 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 23:46:17.529705 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 16 23:46:17.530824 jq[1508]: false
Apr 16 23:46:17.542358 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 23:46:17.555360 systemd[1]: Started ntpd.service - Network Time Service.
Apr 16 23:46:17.562081 extend-filesystems[1511]: Found /dev/sda6
Apr 16 23:46:17.570120 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 23:46:17.573854 coreos-metadata[1505]: Apr 16 23:46:17.573 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Apr 16 23:46:17.573854 coreos-metadata[1505]: Apr 16 23:46:17.573 INFO Fetch successful
Apr 16 23:46:17.573854 coreos-metadata[1505]: Apr 16 23:46:17.573 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Apr 16 23:46:17.573854 coreos-metadata[1505]: Apr 16 23:46:17.573 INFO Fetch successful
Apr 16 23:46:17.574888 coreos-metadata[1505]: Apr 16 23:46:17.574 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Apr 16 23:46:17.574956 coreos-metadata[1505]: Apr 16 23:46:17.574 INFO Fetch successful
Apr 16 23:46:17.574956 coreos-metadata[1505]: Apr 16 23:46:17.574 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Apr 16 23:46:17.574956 coreos-metadata[1505]: Apr 16 23:46:17.574 INFO Fetch successful
Apr 16 23:46:17.578096 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Refreshing passwd entry cache
Apr 16 23:46:17.578106 oslogin_cache_refresh[1512]: Refreshing passwd entry cache
Apr 16 23:46:17.581842 extend-filesystems[1511]: Found /dev/sda9
Apr 16 23:46:17.602627 extend-filesystems[1511]: Checking size of /dev/sda9
Apr 16 23:46:17.590885 oslogin_cache_refresh[1512]: Failure getting users, quitting
Apr 16 23:46:17.585602 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 23:46:17.612255 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Failure getting users, quitting
Apr 16 23:46:17.612255 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 16 23:46:17.612255 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Refreshing group entry cache
Apr 16 23:46:17.612255 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Failure getting groups, quitting
Apr 16 23:46:17.612255 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 16 23:46:17.590908 oslogin_cache_refresh[1512]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 16 23:46:17.598644 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 23:46:17.590964 oslogin_cache_refresh[1512]: Refreshing group entry cache
Apr 16 23:46:17.594563 oslogin_cache_refresh[1512]: Failure getting groups, quitting
Apr 16 23:46:17.594595 oslogin_cache_refresh[1512]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 16 23:46:17.618510 extend-filesystems[1511]: Resized partition /dev/sda9
Apr 16 23:46:17.641492 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks
Apr 16 23:46:17.635554 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 23:46:17.641827 extend-filesystems[1533]: resize2fs 1.47.3 (8-Jul-2025)
Apr 16 23:46:17.659570 ntpd[1517]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting
Apr 16 23:46:17.662359 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting
Apr 16 23:46:17.662359 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 16 23:46:17.662359 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: ----------------------------------------------------
Apr 16 23:46:17.662359 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: ntp-4 is maintained by Network Time Foundation,
Apr 16 23:46:17.662359 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 16 23:46:17.662359 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: corporation. Support and training for ntp-4 are
Apr 16 23:46:17.662359 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: available at https://www.nwtime.org/support
Apr 16 23:46:17.662359 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: ----------------------------------------------------
Apr 16 23:46:17.652123 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Apr 16 23:46:17.659650 ntpd[1517]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 16 23:46:17.653726 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 23:46:17.659666 ntpd[1517]: ----------------------------------------------------
Apr 16 23:46:17.654855 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 23:46:17.659678 ntpd[1517]: ntp-4 is maintained by Network Time Foundation,
Apr 16 23:46:17.659691 ntpd[1517]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 16 23:46:17.659704 ntpd[1517]: corporation. Support and training for ntp-4 are
Apr 16 23:46:17.659718 ntpd[1517]: available at https://www.nwtime.org/support
Apr 16 23:46:17.659733 ntpd[1517]: ----------------------------------------------------
Apr 16 23:46:17.666056 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: proto: precision = 0.082 usec (-23)
Apr 16 23:46:17.665774 ntpd[1517]: proto: precision = 0.082 usec (-23)
Apr 16 23:46:17.667653 ntpd[1517]: basedate set to 2026-04-04
Apr 16 23:46:17.669570 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: basedate set to 2026-04-04
Apr 16 23:46:17.669570 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: gps base set to 2026-04-05 (week 2413)
Apr 16 23:46:17.669570 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: Listen and drop on 0 v6wildcard [::]:123
Apr 16 23:46:17.669570 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 16 23:46:17.667682 ntpd[1517]: gps base set to 2026-04-05 (week 2413)
Apr 16 23:46:17.667838 ntpd[1517]: Listen and drop on 0 v6wildcard [::]:123
Apr 16 23:46:17.668145 ntpd[1517]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 16 23:46:17.670602 ntpd[1517]: Listen normally on 2 lo 127.0.0.1:123
Apr 16 23:46:17.682450 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: Listen normally on 2 lo 127.0.0.1:123
Apr 16 23:46:17.682450 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: Listen normally on 3 eth0 10.128.0.50:123
Apr 16 23:46:17.682450 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: Listen normally on 4 lo [::1]:123
Apr 16 23:46:17.682450 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: bind(21) AF_INET6 [fe80::4001:aff:fe80:32%2]:123 flags 0x811 failed: Cannot assign requested address
Apr 16 23:46:17.682450 ntpd[1517]: 16 Apr 23:46:17 ntpd[1517]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:32%2]:123
Apr 16 23:46:17.673623 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 23:46:17.672503 ntpd[1517]: Listen normally on 3 eth0 10.128.0.50:123
Apr 16 23:46:17.672553 ntpd[1517]: Listen normally on 4 lo [::1]:123
Apr 16 23:46:17.672596 ntpd[1517]: bind(21) AF_INET6 [fe80::4001:aff:fe80:32%2]:123 flags 0x811 failed: Cannot assign requested address
Apr 16 23:46:17.672625 ntpd[1517]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:32%2]:123
Apr 16 23:46:17.686717 kernel: ntpd[1517]: segfault at 24 ip 0000561ef1a32aeb sp 00007ffd28be7490 error 4 in ntpd[68aeb,561ef19d0000+80000] likely on CPU 1 (core 0, socket 0)
Apr 16 23:46:17.687522 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Apr 16 23:46:17.720248 systemd-coredump[1542]: Process 1517 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Apr 16 23:46:17.741255 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 23:46:17.753120 kernel: EXT4-fs (sda9): resized filesystem to 3587067
Apr 16 23:46:17.757032 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 23:46:17.757385 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 23:46:17.766186 jq[1541]: true
Apr 16 23:46:17.757900 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 16 23:46:17.758234 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 16 23:46:17.768113 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 23:46:17.768505 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 23:46:17.769328 extend-filesystems[1533]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 16 23:46:17.769328 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 16 23:46:17.769328 extend-filesystems[1533]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long.
Apr 16 23:46:17.815732 extend-filesystems[1511]: Resized filesystem in /dev/sda9
Apr 16 23:46:17.779597 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 23:46:17.842712 update_engine[1538]: I20260416 23:46:17.818764 1538 main.cc:92] Flatcar Update Engine starting
Apr 16 23:46:17.780562 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 23:46:17.797927 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 23:46:17.799526 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 23:46:17.857861 jq[1550]: true
Apr 16 23:46:17.913130 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 16 23:46:17.924000 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 23:46:17.962176 tar[1547]: linux-amd64/LICENSE
Apr 16 23:46:17.961882 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Apr 16 23:46:17.962934 tar[1547]: linux-amd64/helm
Apr 16 23:46:17.975582 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 23:46:17.979964 systemd[1]: Started systemd-coredump@0-1542-0.service - Process Core Dump (PID 1542/UID 0).
Apr 16 23:46:18.016642 bash[1583]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 23:46:18.019324 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 23:46:18.035100 systemd[1]: Starting sshkeys.service...
Apr 16 23:46:18.049396 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 16 23:46:18.056990 systemd-logind[1534]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 16 23:46:18.057050 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 23:46:18.058543 systemd-logind[1534]: New seat seat0.
Apr 16 23:46:18.062921 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 23:46:18.100610 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 23:46:18.114000 dbus-daemon[1506]: [system] SELinux support is enabled
Apr 16 23:46:18.114681 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 23:46:18.136216 dbus-daemon[1506]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1427 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 16 23:46:18.140671 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 16 23:46:18.148887 update_engine[1538]: I20260416 23:46:18.147847 1538 update_check_scheduler.cc:74] Next update check in 11m40s
Apr 16 23:46:18.155607 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 16 23:46:18.165612 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 23:46:18.165863 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 23:46:18.176694 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 23:46:18.176906 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 23:46:18.226097 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 23:46:18.233112 dbus-daemon[1506]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 16 23:46:18.236717 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 23:46:18.246396 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 23:46:18.248458 systemd-coredump[1582]: Process 1517 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1517: #0 0x0000561ef1a32aeb n/a (ntpd + 0x68aeb) #1 0x0000561ef19dbcdf n/a (ntpd + 0x11cdf) #2 0x0000561ef19dc575 n/a (ntpd + 0x12575) #3 0x0000561ef19d7d8a n/a (ntpd + 0xdd8a) #4 0x0000561ef19d95d3 n/a (ntpd + 0xf5d3) #5 0x0000561ef19e1fd1 n/a (ntpd + 0x17fd1) #6 0x0000561ef19d2c2d n/a (ntpd + 0x8c2d) #7 0x00007f1b2bd1b16c n/a (libc.so.6 + 0x2716c) #8 0x00007f1b2bd1b229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000561ef19d2c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Apr 16 23:46:18.257942 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Apr 16 23:46:18.258204 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Apr 16 23:46:18.264044 systemd[1]: systemd-coredump@0-1542-0.service: Deactivated successfully.
Apr 16 23:46:18.286346 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 23:46:18.295743 systemd[1]: Started sshd@0-10.128.0.50:22-50.85.169.122:53428.service - OpenSSH per-connection server daemon (50.85.169.122:53428).
Apr 16 23:46:18.312524 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 16 23:46:18.327126 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 23:46:18.333864 coreos-metadata[1591]: Apr 16 23:46:18.333 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Apr 16 23:46:18.337532 coreos-metadata[1591]: Apr 16 23:46:18.337 INFO Fetch failed with 404: resource not found
Apr 16 23:46:18.337532 coreos-metadata[1591]: Apr 16 23:46:18.337 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Apr 16 23:46:18.339317 coreos-metadata[1591]: Apr 16 23:46:18.339 INFO Fetch successful
Apr 16 23:46:18.339428 coreos-metadata[1591]: Apr 16 23:46:18.339 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Apr 16 23:46:18.346270 coreos-metadata[1591]: Apr 16 23:46:18.346 INFO Fetch failed with 404: resource not found
Apr 16 23:46:18.346384 coreos-metadata[1591]: Apr 16 23:46:18.346 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Apr 16 23:46:18.346384 coreos-metadata[1591]: Apr 16 23:46:18.346 INFO Fetch failed with 404: resource not found
Apr 16 23:46:18.346384 coreos-metadata[1591]: Apr 16 23:46:18.346 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Apr 16 23:46:18.348752 coreos-metadata[1591]: Apr 16 23:46:18.348 INFO Fetch successful
Apr 16 23:46:18.354793 unknown[1591]: wrote ssh authorized keys file for user: core
Apr 16 23:46:18.362296 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:46:18.372735 systemd[1]: Started ntpd.service - Network Time Service.
Apr 16 23:46:18.383159 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 23:46:18.383534 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 23:46:18.402538 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 23:46:18.448345 update-ssh-keys[1618]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 23:46:18.450885 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 16 23:46:18.469508 systemd[1]: Finished sshkeys.service.
Apr 16 23:46:18.536727 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 23:46:18.553154 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 23:46:18.563350 ntpd[1617]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting
Apr 16 23:46:18.563857 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: ----------------------------------------------------
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: ntp-4 is maintained by Network Time Foundation,
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: corporation. Support and training for ntp-4 are
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: available at https://www.nwtime.org/support
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: ----------------------------------------------------
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: proto: precision = 0.077 usec (-24)
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: basedate set to 2026-04-04
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: gps base set to 2026-04-05 (week 2413)
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: Listen and drop on 0 v6wildcard [::]:123
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: Listen normally on 2 lo 127.0.0.1:123
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: Listen normally on 3 eth0 10.128.0.50:123
Apr 16 23:46:18.566159 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: Listen normally on 4 lo [::1]:123
Apr 16 23:46:18.563452 ntpd[1617]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 16 23:46:18.563468 ntpd[1617]: ----------------------------------------------------
Apr 16 23:46:18.584675 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: bind(21) AF_INET6 [fe80::4001:aff:fe80:32%2]:123 flags 0x811 failed: Cannot assign requested address
Apr 16 23:46:18.584675 ntpd[1617]: 16 Apr 23:46:18 ntpd[1617]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:32%2]:123
Apr 16 23:46:18.563482 ntpd[1617]: ntp-4 is maintained by Network Time Foundation,
Apr 16 23:46:18.563495 ntpd[1617]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 16 23:46:18.563508 ntpd[1617]: corporation. Support and training for ntp-4 are
Apr 16 23:46:18.563521 ntpd[1617]: available at https://www.nwtime.org/support
Apr 16 23:46:18.563534 ntpd[1617]: ----------------------------------------------------
Apr 16 23:46:18.585030 kernel: ntpd[1617]: segfault at 24 ip 000055a457ae7aeb sp 00007ffc402360f0 error 4 in ntpd[68aeb,55a457a85000+80000] likely on CPU 1 (core 0, socket 0)
Apr 16 23:46:18.564590 ntpd[1617]: proto: precision = 0.077 usec (-24)
Apr 16 23:46:18.606463 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Apr 16 23:46:18.564910 ntpd[1617]: basedate set to 2026-04-04
Apr 16 23:46:18.564929 ntpd[1617]: gps base set to 2026-04-05 (week 2413)
Apr 16 23:46:18.565035 ntpd[1617]: Listen and drop on 0 v6wildcard [::]:123
Apr 16 23:46:18.565073 ntpd[1617]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 16 23:46:18.565300 ntpd[1617]: Listen normally on 2 lo 127.0.0.1:123
Apr 16 23:46:18.565337 ntpd[1617]: Listen normally on 3 eth0 10.128.0.50:123
Apr 16 23:46:18.565379 ntpd[1617]: Listen normally on 4 lo [::1]:123
Apr 16 23:46:18.570147 ntpd[1617]: bind(21) AF_INET6 [fe80::4001:aff:fe80:32%2]:123 flags 0x811 failed: Cannot assign requested address
Apr 16 23:46:18.570190 ntpd[1617]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:32%2]:123
Apr 16 23:46:18.609604 systemd-networkd[1427]: eth0: Gained IPv6LL
Apr 16 23:46:18.619240 systemd-coredump[1630]: Process 1617 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Apr 16 23:46:18.619696 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 23:46:18.628594 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 23:46:18.647268 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 23:46:18.659424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:46:18.674275 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 23:46:18.686168 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Apr 16 23:46:18.696309 systemd[1]: Started systemd-coredump@1-1630-0.service - Process Core Dump (PID 1630/UID 0).
Apr 16 23:46:18.776877 init.sh[1634]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Apr 16 23:46:18.776877 init.sh[1634]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Apr 16 23:46:18.780680 init.sh[1634]: + /usr/bin/google_instance_setup
Apr 16 23:46:18.836734 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 23:46:18.843310 dbus-daemon[1506]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 16 23:46:18.845589 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 16 23:46:18.852035 dbus-daemon[1506]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1609 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 16 23:46:18.865267 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 16 23:46:18.958639 containerd[1563]: time="2026-04-16T23:46:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 16 23:46:18.963201 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 23:46:18.968241 containerd[1563]: time="2026-04-16T23:46:18.963876778Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 16 23:46:19.007542 systemd-coredump[1635]: Process 1617 (ntpd) of user 0 dumped core.
    Module libnss_usrfiles.so.2 without build-id.
    Module libgcc_s.so.1 without build-id.
    Module ld-linux-x86-64.so.2 without build-id.
    Module libc.so.6 without build-id.
    Module libcrypto.so.3 without build-id.
    Module libm.so.6 without build-id.
    Module libcap.so.2 without build-id.
    Module ntpd without build-id.
    Stack trace of thread 1617:
    #0  0x000055a457ae7aeb n/a (ntpd + 0x68aeb)
    #1  0x000055a457a90cdf n/a (ntpd + 0x11cdf)
    #2  0x000055a457a91575 n/a (ntpd + 0x12575)
    #3  0x000055a457a8cd8a n/a (ntpd + 0xdd8a)
    #4  0x000055a457a8e5d3 n/a (ntpd + 0xf5d3)
    #5  0x000055a457a96fd1 n/a (ntpd + 0x17fd1)
    #6  0x000055a457a87c2d n/a (ntpd + 0x8c2d)
    #7  0x00007f7c848e816c n/a (libc.so.6 + 0x2716c)
    #8  0x00007f7c848e8229 __libc_start_main (libc.so.6 + 0x27229)
    #9  0x000055a457a87c55 n/a (ntpd + 0x8c55)
    ELF object binary architecture: AMD x86-64
Apr 16 23:46:19.011931 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Apr 16 23:46:19.012187 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Apr 16 23:46:19.023121 systemd[1]: systemd-coredump@1-1630-0.service: Deactivated successfully.
Apr 16 23:46:19.075536 containerd[1563]: time="2026-04-16T23:46:19.073400299Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.733µs"
Apr 16 23:46:19.075536 containerd[1563]: time="2026-04-16T23:46:19.075534661Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 16 23:46:19.075974 containerd[1563]: time="2026-04-16T23:46:19.075568159Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 16 23:46:19.076513 containerd[1563]: time="2026-04-16T23:46:19.076463390Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 16 23:46:19.076613 containerd[1563]: time="2026-04-16T23:46:19.076532310Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 16 23:46:19.076613 containerd[1563]: time="2026-04-16T23:46:19.076579484Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:46:19.076709 containerd[1563]: time="2026-04-16T23:46:19.076684063Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:46:19.076709 containerd[1563]: time="2026-04-16T23:46:19.076703714Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:46:19.077080 containerd[1563]: time="2026-04-16T23:46:19.077038452Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:46:19.077080 containerd[1563]: time="2026-04-16T23:46:19.077076392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:46:19.077213 containerd[1563]: time="2026-04-16T23:46:19.077096658Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:46:19.077213 containerd[1563]: time="2026-04-16T23:46:19.077109956Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 16 23:46:19.077321 containerd[1563]: time="2026-04-16T23:46:19.077238767Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 16 23:46:19.079941 containerd[1563]: time="2026-04-16T23:46:19.079900369Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:46:19.080038 containerd[1563]: time="2026-04-16T23:46:19.079969155Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:46:19.080038 containerd[1563]: time="2026-04-16T23:46:19.079991315Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 16 23:46:19.081590 containerd[1563]: time="2026-04-16T23:46:19.081509743Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 16 23:46:19.083289 containerd[1563]: time="2026-04-16T23:46:19.082600580Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 16 23:46:19.083289 containerd[1563]: time="2026-04-16T23:46:19.082706324Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 23:46:19.092282 containerd[1563]: time="2026-04-16T23:46:19.092242562Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 16 23:46:19.092393 containerd[1563]: time="2026-04-16T23:46:19.092323587Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 16 23:46:19.092470 containerd[1563]: time="2026-04-16T23:46:19.092432891Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 16 23:46:19.092470 containerd[1563]: time="2026-04-16T23:46:19.092458959Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092478786Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092498734Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092522024Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092542075Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092560905Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092576568Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092594275Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092613617Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092755065Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092781871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092814730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092845842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092865338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 16 23:46:19.093009 containerd[1563]: time="2026-04-16T23:46:19.092882616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 16 23:46:19.093662 containerd[1563]: time="2026-04-16T23:46:19.092901894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 16 23:46:19.093662 containerd[1563]: time="2026-04-16T23:46:19.092917836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 16 23:46:19.093662 containerd[1563]: time="2026-04-16T23:46:19.092936843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 16 23:46:19.093662 containerd[1563]: time="2026-04-16T23:46:19.092953404Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 16 23:46:19.093848 containerd[1563]: time="2026-04-16T23:46:19.092969090Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 16 23:46:19.093923 containerd[1563]: time="2026-04-16T23:46:19.093893077Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 16 23:46:19.093980 containerd[1563]: time="2026-04-16T23:46:19.093928466Z" level=info msg="Start snapshots syncer"
Apr 16 23:46:19.097020 containerd[1563]: time="2026-04-16T23:46:19.096976844Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 16 23:46:19.100548 containerd[1563]: time="2026-04-16T23:46:19.100475606Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 16 23:46:19.100737 containerd[1563]: time="2026-04-16T23:46:19.100575405Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 16 23:46:19.100737 containerd[1563]: time="2026-04-16T23:46:19.100678758Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 16 23:46:19.100930 containerd[1563]: time="2026-04-16T23:46:19.100898880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 16 23:46:19.100998 containerd[1563]: time="2026-04-16T23:46:19.100945953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 16 23:46:19.100998 containerd[1563]: time="2026-04-16T23:46:19.100967577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 16 23:46:19.100998 containerd[1563]: time="2026-04-16T23:46:19.100986771Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 16 23:46:19.101154 containerd[1563]: time="2026-04-16T23:46:19.101009755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 16 23:46:19.101154 containerd[1563]: time="2026-04-16T23:46:19.101028046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 16 23:46:19.101154 containerd[1563]: time="2026-04-16T23:46:19.101046958Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 16 23:46:19.101154 containerd[1563]: time="2026-04-16T23:46:19.101081610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 16 23:46:19.101154 containerd[1563]: time="2026-04-16T23:46:19.101101647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 16 23:46:19.101154 containerd[1563]: time="2026-04-16T23:46:19.101119518Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 16 23:46:19.102802 containerd[1563]: time="2026-04-16T23:46:19.102741807Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 23:46:19.102905 containerd[1563]: time="2026-04-16T23:46:19.102818163Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 23:46:19.102905 containerd[1563]: time="2026-04-16T23:46:19.102838230Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 23:46:19.102905 containerd[1563]: time="2026-04-16T23:46:19.102854732Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 23:46:19.102905 containerd[1563]: time="2026-04-16T23:46:19.102869504Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 16 23:46:19.102905 containerd[1563]: time="2026-04-16T23:46:19.102888655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 16 23:46:19.104530 containerd[1563]: time="2026-04-16T23:46:19.102915715Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 16 23:46:19.104530 containerd[1563]: time="2026-04-16T23:46:19.102943884Z" level=info msg="runtime interface created"
Apr 16 23:46:19.104530 containerd[1563]: time="2026-04-16T23:46:19.102953443Z" level=info msg="created NRI interface"
Apr 16 23:46:19.104530 containerd[1563]: time="2026-04-16T23:46:19.102968064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 16 23:46:19.104530 containerd[1563]: time="2026-04-16T23:46:19.102990671Z" level=info msg="Connect containerd service"
Apr 16 23:46:19.104530 containerd[1563]: time="2026-04-16T23:46:19.103032180Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 23:46:19.109561 containerd[1563]: time="2026-04-16T23:46:19.109521696Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 23:46:19.124294 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2.
Apr 16 23:46:19.132540 systemd[1]: Started ntpd.service - Network Time Service.
Apr 16 23:46:19.211039 polkitd[1651]: Started polkitd version 126
Apr 16 23:46:19.237649 ntpd[1665]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting
Apr 16 23:46:19.237742 ntpd[1665]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 16 23:46:19.238205 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:32:47 UTC 2026 (1): Starting
Apr 16 23:46:19.238205 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 16 23:46:19.238205 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: ----------------------------------------------------
Apr 16 23:46:19.238205 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: ntp-4 is maintained by Network Time Foundation,
Apr 16 23:46:19.238205 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 16 23:46:19.238205 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: corporation. Support and training for ntp-4 are
Apr 16 23:46:19.238205 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: available at https://www.nwtime.org/support
Apr 16 23:46:19.238205 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: ----------------------------------------------------
Apr 16 23:46:19.237758 ntpd[1665]: ----------------------------------------------------
Apr 16 23:46:19.237771 ntpd[1665]: ntp-4 is maintained by Network Time Foundation,
Apr 16 23:46:19.237798 ntpd[1665]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 16 23:46:19.237812 ntpd[1665]: corporation. Support and training for ntp-4 are
Apr 16 23:46:19.237825 ntpd[1665]: available at https://www.nwtime.org/support
Apr 16 23:46:19.237840 ntpd[1665]: ----------------------------------------------------
Apr 16 23:46:19.243372 polkitd[1651]: Loading rules from directory /etc/polkit-1/rules.d
Apr 16 23:46:19.250179 polkitd[1651]: Loading rules from directory /run/polkit-1/rules.d
Apr 16 23:46:19.250256 polkitd[1651]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Apr 16 23:46:19.250865 polkitd[1651]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Apr 16 23:46:19.250920 polkitd[1651]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Apr 16 23:46:19.250981 polkitd[1651]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 16 23:46:19.259933 ntpd[1665]: proto: precision = 0.099 usec (-23)
Apr 16 23:46:19.260200 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: proto: precision = 0.099 usec (-23)
Apr 16 23:46:19.260258 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: basedate set to 2026-04-04
Apr 16 23:46:19.260235 ntpd[1665]: basedate set to 2026-04-04
Apr 16 23:46:19.260372 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: gps base set to 2026-04-05 (week 2413)
Apr 16 23:46:19.260254 ntpd[1665]: gps base set to 2026-04-05 (week 2413)
Apr 16 23:46:19.260510 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: Listen and drop on 0 v6wildcard [::]:123
Apr 16 23:46:19.260510 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 16 23:46:19.260372 ntpd[1665]: Listen and drop on 0 v6wildcard [::]:123
Apr 16 23:46:19.260445 ntpd[1665]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 16 23:46:19.260680 ntpd[1665]: Listen normally on 2 lo 127.0.0.1:123
Apr 16 23:46:19.260781 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: Listen normally on 2 lo 127.0.0.1:123
Apr 16 23:46:19.260781 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: Listen normally on 3 eth0 10.128.0.50:123
Apr 16 23:46:19.260781 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: Listen normally on 4 lo [::1]:123
Apr 16 23:46:19.260720 ntpd[1665]: Listen normally on 3 eth0 10.128.0.50:123
Apr 16 23:46:19.260976 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:32%2]:123
Apr 16 23:46:19.260976 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: Listening on routing socket on fd #22 for interface updates
Apr 16 23:46:19.260762 ntpd[1665]: Listen normally on 4 lo [::1]:123
Apr 16 23:46:19.260813 ntpd[1665]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:32%2]:123
Apr 16 23:46:19.260852 ntpd[1665]: Listening on routing socket on fd #22 for interface updates
Apr 16 23:46:19.263503 polkitd[1651]: Finished loading, compiling and executing 2 rules
Apr 16 23:46:19.263883 systemd[1]: Started polkit.service - Authorization Manager.
Apr 16 23:46:19.264663 dbus-daemon[1506]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 16 23:46:19.265310 polkitd[1651]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 16 23:46:19.286142 ntpd[1665]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 16 23:46:19.286600 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 16 23:46:19.286600 ntpd[1665]: 16 Apr 23:46:19 ntpd[1665]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 16 23:46:19.286190 ntpd[1665]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 16 23:46:19.299012 sshd[1608]: Accepted publickey for core from 50.85.169.122 port 53428 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:46:19.311954 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:46:19.340936 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 23:46:19.353780 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 16 23:46:19.425648 systemd-logind[1534]: New session 1 of user core.
Apr 16 23:46:19.427312 systemd-hostnamed[1609]: Hostname set to (transient)
Apr 16 23:46:19.431171 systemd-resolved[1428]: System hostname changed to 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a'.
Apr 16 23:46:19.446503 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 16 23:46:19.464828 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 16 23:46:19.471601 tar[1547]: linux-amd64/README.md
Apr 16 23:46:19.511536 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 23:46:19.522599 systemd-logind[1534]: New session c1 of user core.
Apr 16 23:46:19.530876 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 23:46:19.569655 containerd[1563]: time="2026-04-16T23:46:19.569545660Z" level=info msg="Start subscribing containerd event"
Apr 16 23:46:19.570469 containerd[1563]: time="2026-04-16T23:46:19.570302526Z" level=info msg="Start recovering state"
Apr 16 23:46:19.571130 containerd[1563]: time="2026-04-16T23:46:19.570920251Z" level=info msg="Start event monitor"
Apr 16 23:46:19.571633 containerd[1563]: time="2026-04-16T23:46:19.571246328Z" level=info msg="Start cni network conf syncer for default"
Apr 16 23:46:19.571633 containerd[1563]: time="2026-04-16T23:46:19.571275296Z" level=info msg="Start streaming server"
Apr 16 23:46:19.571633 containerd[1563]: time="2026-04-16T23:46:19.571249521Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 16 23:46:19.571942 containerd[1563]: time="2026-04-16T23:46:19.571291831Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 16 23:46:19.572024 containerd[1563]: time="2026-04-16T23:46:19.572007607Z" level=info msg="runtime interface starting up..."
Apr 16 23:46:19.572245 containerd[1563]: time="2026-04-16T23:46:19.572089830Z" level=info msg="starting plugins..."
Apr 16 23:46:19.572245 containerd[1563]: time="2026-04-16T23:46:19.572118070Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 16 23:46:19.573766 containerd[1563]: time="2026-04-16T23:46:19.572901184Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 16 23:46:19.573766 containerd[1563]: time="2026-04-16T23:46:19.573085003Z" level=info msg="containerd successfully booted in 0.615962s"
Apr 16 23:46:19.573575 systemd[1]: Started containerd.service - containerd container runtime.
Apr 16 23:46:19.878527 systemd[1684]: Queued start job for default target default.target.
Apr 16 23:46:19.884063 systemd[1684]: Created slice app.slice - User Application Slice.
Apr 16 23:46:19.884741 systemd[1684]: Reached target paths.target - Paths.
Apr 16 23:46:19.884825 systemd[1684]: Reached target timers.target - Timers.
Apr 16 23:46:19.888544 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 23:46:19.917062 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 23:46:19.917691 systemd[1684]: Reached target sockets.target - Sockets.
Apr 16 23:46:19.917949 systemd[1684]: Reached target basic.target - Basic System.
Apr 16 23:46:19.918265 systemd[1684]: Reached target default.target - Main User Target.
Apr 16 23:46:19.918519 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 23:46:19.918730 systemd[1684]: Startup finished in 368ms.
Apr 16 23:46:19.933906 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 23:46:19.975504 instance-setup[1642]: INFO Running google_set_multiqueue.
Apr 16 23:46:19.996397 instance-setup[1642]: INFO Set channels for eth0 to 2.
Apr 16 23:46:20.002042 instance-setup[1642]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Apr 16 23:46:20.004482 instance-setup[1642]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Apr 16 23:46:20.004777 instance-setup[1642]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Apr 16 23:46:20.006528 instance-setup[1642]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Apr 16 23:46:20.007060 instance-setup[1642]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Apr 16 23:46:20.008765 instance-setup[1642]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Apr 16 23:46:20.009292 instance-setup[1642]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Apr 16 23:46:20.010738 instance-setup[1642]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Apr 16 23:46:20.018810 instance-setup[1642]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Apr 16 23:46:20.023469 instance-setup[1642]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Apr 16 23:46:20.025517 instance-setup[1642]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Apr 16 23:46:20.025570 instance-setup[1642]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Apr 16 23:46:20.048267 init.sh[1634]: + /usr/bin/google_metadata_script_runner --script-type startup
Apr 16 23:46:20.251332 startup-script[1730]: INFO Starting startup scripts.
Apr 16 23:46:20.256743 startup-script[1730]: INFO No startup scripts found in metadata.
Apr 16 23:46:20.256820 startup-script[1730]: INFO Finished running startup scripts.
Apr 16 23:46:20.280441 init.sh[1634]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Apr 16 23:46:20.280441 init.sh[1634]: + daemon_pids=()
Apr 16 23:46:20.280441 init.sh[1634]: + for d in accounts clock_skew network
Apr 16 23:46:20.280441 init.sh[1634]: + daemon_pids+=($!)
Apr 16 23:46:20.280441 init.sh[1634]: + for d in accounts clock_skew network
Apr 16 23:46:20.280767 init.sh[1734]: + /usr/bin/google_accounts_daemon
Apr 16 23:46:20.281130 init.sh[1634]: + daemon_pids+=($!)
Apr 16 23:46:20.281130 init.sh[1634]: + for d in accounts clock_skew network
Apr 16 23:46:20.281130 init.sh[1634]: + daemon_pids+=($!)
Apr 16 23:46:20.281130 init.sh[1634]: + NOTIFY_SOCKET=/run/systemd/notify
Apr 16 23:46:20.281130 init.sh[1634]: + /usr/bin/systemd-notify --ready
Apr 16 23:46:20.281346 init.sh[1736]: + /usr/bin/google_network_daemon
Apr 16 23:46:20.283306 init.sh[1735]: + /usr/bin/google_clock_skew_daemon
Apr 16 23:46:20.308222 systemd[1]: Started sshd@1-10.128.0.50:22-50.85.169.122:56386.service - OpenSSH per-connection server daemon (50.85.169.122:56386).
Apr 16 23:46:20.319101 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Apr 16 23:46:20.332597 init.sh[1634]: + wait -n 1734 1735 1736
Apr 16 23:46:20.636042 google-networking[1736]: INFO Starting Google Networking daemon.
Apr 16 23:46:20.715915 google-clock-skew[1735]: INFO Starting Google Clock Skew daemon.
Apr 16 23:46:20.725683 google-clock-skew[1735]: INFO Clock drift token has changed: 0.
Apr 16 23:46:20.777630 groupadd[1750]: group added to /etc/group: name=google-sudoers, GID=1000
Apr 16 23:46:20.782312 groupadd[1750]: group added to /etc/gshadow: name=google-sudoers
Apr 16 23:46:20.834477 groupadd[1750]: new group: name=google-sudoers, GID=1000
Apr 16 23:46:20.865157 google-accounts[1734]: INFO Starting Google Accounts daemon.
Apr 16 23:46:20.877584 google-accounts[1734]: WARNING OS Login not installed.
Apr 16 23:46:20.879002 google-accounts[1734]: INFO Creating a new user account for 0.
Apr 16 23:46:20.884988 init.sh[1758]: useradd: invalid user name '0': use --badname to ignore
Apr 16 23:46:20.885299 google-accounts[1734]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Apr 16 23:46:20.984726 sshd[1739]: Accepted publickey for core from 50.85.169.122 port 56386 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:46:20.987093 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:46:20.995857 systemd-logind[1534]: New session 2 of user core.
Apr 16 23:46:20.997942 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 23:46:21.027878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:46:21.040327 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 16 23:46:21.046915 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:46:21.050060 systemd[1]: Startup finished in 4.078s (kernel) + 10.457s (initrd) + 8.856s (userspace) = 23.391s.
Apr 16 23:46:21.327292 sshd[1764]: Connection closed by 50.85.169.122 port 56386
Apr 16 23:46:21.329603 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Apr 16 23:46:21.336467 systemd[1]: sshd@1-10.128.0.50:22-50.85.169.122:56386.service: Deactivated successfully.
Apr 16 23:46:21.336703 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit.
Apr 16 23:46:21.339826 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 23:46:21.343486 systemd-logind[1534]: Removed session 2.
Apr 16 23:46:21.448194 systemd[1]: Started sshd@2-10.128.0.50:22-50.85.169.122:56398.service - OpenSSH per-connection server daemon (50.85.169.122:56398).
Apr 16 23:46:22.000307 systemd-resolved[1428]: Clock change detected. Flushing caches.
Apr 16 23:46:22.000617 google-clock-skew[1735]: INFO Synced system time with hardware clock.
Apr 16 23:46:22.211155 kubelet[1766]: E0416 23:46:22.210086 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:46:22.213433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:46:22.213679 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:46:22.214139 systemd[1]: kubelet.service: Consumed 1.204s CPU time, 256.9M memory peak.
Apr 16 23:46:22.433479 sshd[1780]: Accepted publickey for core from 50.85.169.122 port 56398 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:46:22.434816 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:46:22.442167 systemd-logind[1534]: New session 3 of user core.
Apr 16 23:46:22.449296 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 23:46:22.753710 sshd[1785]: Connection closed by 50.85.169.122 port 56398
Apr 16 23:46:22.754921 sshd-session[1780]: pam_unix(sshd:session): session closed for user core
Apr 16 23:46:22.760799 systemd[1]: sshd@2-10.128.0.50:22-50.85.169.122:56398.service: Deactivated successfully.
Apr 16 23:46:22.763309 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 23:46:22.764902 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit.
Apr 16 23:46:22.766998 systemd-logind[1534]: Removed session 3.
Apr 16 23:46:22.872674 systemd[1]: Started sshd@3-10.128.0.50:22-50.85.169.122:56410.service - OpenSSH per-connection server daemon (50.85.169.122:56410).
Apr 16 23:46:23.461433 sshd[1791]: Accepted publickey for core from 50.85.169.122 port 56410 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:46:23.463007 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:46:23.470166 systemd-logind[1534]: New session 4 of user core.
Apr 16 23:46:23.477310 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 16 23:46:23.787400 sshd[1794]: Connection closed by 50.85.169.122 port 56410
Apr 16 23:46:23.789381 sshd-session[1791]: pam_unix(sshd:session): session closed for user core
Apr 16 23:46:23.795527 systemd[1]: sshd@3-10.128.0.50:22-50.85.169.122:56410.service: Deactivated successfully.
Apr 16 23:46:23.798267 systemd[1]: session-4.scope: Deactivated successfully.
Apr 16 23:46:23.801177 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit.
Apr 16 23:46:23.802976 systemd-logind[1534]: Removed session 4.
Apr 16 23:46:23.905666 systemd[1]: Started sshd@4-10.128.0.50:22-50.85.169.122:56418.service - OpenSSH per-connection server daemon (50.85.169.122:56418).
Apr 16 23:46:24.486913 sshd[1800]: Accepted publickey for core from 50.85.169.122 port 56418 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:46:24.488613 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:46:24.495951 systemd-logind[1534]: New session 5 of user core.
Apr 16 23:46:24.506298 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 16 23:46:24.726248 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 23:46:24.726722 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:46:24.740353 sudo[1804]: pam_unix(sudo:session): session closed for user root
Apr 16 23:46:24.847123 sshd[1803]: Connection closed by 50.85.169.122 port 56418
Apr 16 23:46:24.849396 sshd-session[1800]: pam_unix(sshd:session): session closed for user core
Apr 16 23:46:24.854422 systemd[1]: sshd@4-10.128.0.50:22-50.85.169.122:56418.service: Deactivated successfully.
Apr 16 23:46:24.856958 systemd[1]: session-5.scope: Deactivated successfully.
Apr 16 23:46:24.859908 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit.
Apr 16 23:46:24.861562 systemd-logind[1534]: Removed session 5.
Apr 16 23:46:24.965676 systemd[1]: Started sshd@5-10.128.0.50:22-50.85.169.122:56422.service - OpenSSH per-connection server daemon (50.85.169.122:56422).
Apr 16 23:46:25.550797 sshd[1810]: Accepted publickey for core from 50.85.169.122 port 56422 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:46:25.551600 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:46:25.559022 systemd-logind[1534]: New session 6 of user core.
Apr 16 23:46:25.568350 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 16 23:46:25.776757 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 23:46:25.777267 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:46:25.782465 sudo[1815]: pam_unix(sudo:session): session closed for user root
Apr 16 23:46:25.795904 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 16 23:46:25.796391 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:46:25.809119 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:46:25.854252 augenrules[1837]: No rules
Apr 16 23:46:25.856067 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 23:46:25.856423 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 23:46:25.858662 sudo[1814]: pam_unix(sudo:session): session closed for user root
Apr 16 23:46:25.965517 sshd[1813]: Connection closed by 50.85.169.122 port 56422
Apr 16 23:46:25.966436 sshd-session[1810]: pam_unix(sshd:session): session closed for user core
Apr 16 23:46:25.971451 systemd[1]: sshd@5-10.128.0.50:22-50.85.169.122:56422.service: Deactivated successfully.
Apr 16 23:46:25.973949 systemd[1]: session-6.scope: Deactivated successfully.
Apr 16 23:46:25.976598 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit.
Apr 16 23:46:25.978083 systemd-logind[1534]: Removed session 6.
Apr 16 23:46:26.088514 systemd[1]: Started sshd@6-10.128.0.50:22-50.85.169.122:56430.service - OpenSSH per-connection server daemon (50.85.169.122:56430).
Apr 16 23:46:26.673154 sshd[1846]: Accepted publickey for core from 50.85.169.122 port 56430 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:46:26.674235 sshd-session[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:46:26.681639 systemd-logind[1534]: New session 7 of user core.
Apr 16 23:46:26.687303 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 16 23:46:26.898832 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 23:46:26.899337 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:46:27.380182 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 23:46:27.394712 (dockerd)[1867]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 23:46:27.739686 dockerd[1867]: time="2026-04-16T23:46:27.739326464Z" level=info msg="Starting up"
Apr 16 23:46:27.741850 dockerd[1867]: time="2026-04-16T23:46:27.741787366Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 16 23:46:27.758447 dockerd[1867]: time="2026-04-16T23:46:27.758392990Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 16 23:46:27.949742 dockerd[1867]: time="2026-04-16T23:46:27.949664055Z" level=info msg="Loading containers: start."
Apr 16 23:46:27.968132 kernel: Initializing XFRM netlink socket
Apr 16 23:46:28.303251 systemd-networkd[1427]: docker0: Link UP
Apr 16 23:46:28.308450 dockerd[1867]: time="2026-04-16T23:46:28.308395450Z" level=info msg="Loading containers: done."
Apr 16 23:46:28.325201 dockerd[1867]: time="2026-04-16T23:46:28.325055024Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 23:46:28.325398 dockerd[1867]: time="2026-04-16T23:46:28.325253267Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 16 23:46:28.325398 dockerd[1867]: time="2026-04-16T23:46:28.325358446Z" level=info msg="Initializing buildkit"
Apr 16 23:46:28.357666 dockerd[1867]: time="2026-04-16T23:46:28.357607164Z" level=info msg="Completed buildkit initialization"
Apr 16 23:46:28.366797 dockerd[1867]: time="2026-04-16T23:46:28.366753263Z" level=info msg="Daemon has completed initialization"
Apr 16 23:46:28.367166 dockerd[1867]: time="2026-04-16T23:46:28.366833990Z" level=info msg="API listen on /run/docker.sock"
Apr 16 23:46:28.367061 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 23:46:29.143709 containerd[1563]: time="2026-04-16T23:46:29.143389908Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 16 23:46:29.704283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978778493.mount: Deactivated successfully.
Apr 16 23:46:31.276189 containerd[1563]: time="2026-04-16T23:46:31.276113716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:31.277585 containerd[1563]: time="2026-04-16T23:46:31.277526358Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27101345"
Apr 16 23:46:31.279119 containerd[1563]: time="2026-04-16T23:46:31.279027376Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:31.282575 containerd[1563]: time="2026-04-16T23:46:31.282500547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:31.284042 containerd[1563]: time="2026-04-16T23:46:31.283804333Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 2.14036117s"
Apr 16 23:46:31.284042 containerd[1563]: time="2026-04-16T23:46:31.283850889Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 16 23:46:31.284924 containerd[1563]: time="2026-04-16T23:46:31.284883501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 16 23:46:32.464981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:46:32.468182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:46:32.768782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:46:32.782703 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:46:32.828976 containerd[1563]: time="2026-04-16T23:46:32.828070628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:32.834685 containerd[1563]: time="2026-04-16T23:46:32.834620178Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252984"
Apr 16 23:46:32.838142 containerd[1563]: time="2026-04-16T23:46:32.838077954Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:32.843881 kubelet[2148]: E0416 23:46:32.843818 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:46:32.846117 containerd[1563]: time="2026-04-16T23:46:32.845635422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:32.848061 containerd[1563]: time="2026-04-16T23:46:32.848017339Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.56294126s"
Apr 16 23:46:32.848234 containerd[1563]: time="2026-04-16T23:46:32.848208947Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 16 23:46:32.848933 containerd[1563]: time="2026-04-16T23:46:32.848909860Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 16 23:46:32.849974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:46:32.850366 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:46:32.850933 systemd[1]: kubelet.service: Consumed 227ms CPU time, 109.6M memory peak.
Apr 16 23:46:34.068766 containerd[1563]: time="2026-04-16T23:46:34.068698942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:34.070180 containerd[1563]: time="2026-04-16T23:46:34.070136800Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15811119"
Apr 16 23:46:34.071570 containerd[1563]: time="2026-04-16T23:46:34.071508310Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:34.075062 containerd[1563]: time="2026-04-16T23:46:34.075001515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:34.076464 containerd[1563]: time="2026-04-16T23:46:34.076310029Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.227047726s"
Apr 16 23:46:34.076464 containerd[1563]: time="2026-04-16T23:46:34.076350613Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 16 23:46:34.077411 containerd[1563]: time="2026-04-16T23:46:34.077344397Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 16 23:46:35.160269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4181082624.mount: Deactivated successfully.
Apr 16 23:46:35.614051 containerd[1563]: time="2026-04-16T23:46:35.613898480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:35.615609 containerd[1563]: time="2026-04-16T23:46:35.615552243Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25973161"
Apr 16 23:46:35.616967 containerd[1563]: time="2026-04-16T23:46:35.616904715Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:35.619566 containerd[1563]: time="2026-04-16T23:46:35.619506757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:35.620489 containerd[1563]: time="2026-04-16T23:46:35.620301791Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.542825811s"
Apr 16 23:46:35.620489 containerd[1563]: time="2026-04-16T23:46:35.620356122Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 16 23:46:35.621211 containerd[1563]: time="2026-04-16T23:46:35.621110293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 16 23:46:36.144835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521092600.mount: Deactivated successfully.
Apr 16 23:46:37.442084 containerd[1563]: time="2026-04-16T23:46:37.442000112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:37.443659 containerd[1563]: time="2026-04-16T23:46:37.443598353Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388741"
Apr 16 23:46:37.445118 containerd[1563]: time="2026-04-16T23:46:37.445026229Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:37.448631 containerd[1563]: time="2026-04-16T23:46:37.448552524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:37.450299 containerd[1563]: time="2026-04-16T23:46:37.449997633Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.828848698s"
Apr 16 23:46:37.450299 containerd[1563]: time="2026-04-16T23:46:37.450042290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 16 23:46:37.450953 containerd[1563]: time="2026-04-16T23:46:37.450919480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 16 23:46:37.882320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343921884.mount: Deactivated successfully.
Apr 16 23:46:37.889822 containerd[1563]: time="2026-04-16T23:46:37.889763582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:37.890845 containerd[1563]: time="2026-04-16T23:46:37.890781127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321308"
Apr 16 23:46:37.892472 containerd[1563]: time="2026-04-16T23:46:37.892408842Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:37.895206 containerd[1563]: time="2026-04-16T23:46:37.895146991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:37.896279 containerd[1563]: time="2026-04-16T23:46:37.896077969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 445.117297ms"
Apr 16 23:46:37.896279 containerd[1563]: time="2026-04-16T23:46:37.896144782Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 16 23:46:37.897183 containerd[1563]: time="2026-04-16T23:46:37.896848537Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 16 23:46:38.345986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838337957.mount: Deactivated successfully.
Apr 16 23:46:39.496938 containerd[1563]: time="2026-04-16T23:46:39.496876282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:39.498393 containerd[1563]: time="2026-04-16T23:46:39.498330988Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22875498"
Apr 16 23:46:39.499643 containerd[1563]: time="2026-04-16T23:46:39.499573367Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:39.502871 containerd[1563]: time="2026-04-16T23:46:39.502812848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:46:39.504680 containerd[1563]: time="2026-04-16T23:46:39.504196049Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.607310139s"
Apr 16 23:46:39.504680 containerd[1563]: time="2026-04-16T23:46:39.504237664Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 16 23:46:42.977367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 16 23:46:42.983366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:46:43.364291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:46:43.375745 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:46:43.445338 kubelet[2315]: E0416 23:46:43.445275 2315 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:46:43.448249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:46:43.448473 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:46:43.449438 systemd[1]: kubelet.service: Consumed 236ms CPU time, 110M memory peak.
Apr 16 23:46:43.863537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:46:43.864032 systemd[1]: kubelet.service: Consumed 236ms CPU time, 110M memory peak.
Apr 16 23:46:43.867932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:46:43.905660 systemd[1]: Reload requested from client PID 2329 ('systemctl') (unit session-7.scope)...
Apr 16 23:46:43.905690 systemd[1]: Reloading...
Apr 16 23:46:44.100131 zram_generator::config[2371]: No configuration found.
Apr 16 23:46:44.406610 systemd[1]: Reloading finished in 500 ms.
Apr 16 23:46:44.469802 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 23:46:44.469945 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 23:46:44.470394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:46:44.470460 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.3M memory peak.
Apr 16 23:46:44.472490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:46:44.852613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:46:44.862668 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 23:46:44.919827 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 23:46:44.919827 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:46:44.920397 kubelet[2425]: I0416 23:46:44.919863 2425 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 23:46:45.591189 kubelet[2425]: I0416 23:46:45.591128 2425 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 16 23:46:45.591189 kubelet[2425]: I0416 23:46:45.591162 2425 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:46:45.593507 kubelet[2425]: I0416 23:46:45.593463 2425 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 23:46:45.593507 kubelet[2425]: I0416 23:46:45.593498 2425 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 23:46:45.593879 kubelet[2425]: I0416 23:46:45.593841 2425 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 23:46:45.604642 kubelet[2425]: E0416 23:46:45.604573 2425 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 23:46:45.605448 kubelet[2425]: I0416 23:46:45.605283 2425 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:46:45.611351 kubelet[2425]: I0416 23:46:45.611326 2425 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:46:45.615229 kubelet[2425]: I0416 23:46:45.615186 2425 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 23:46:45.617067 kubelet[2425]: I0416 23:46:45.616983 2425 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:46:45.617296 kubelet[2425]: I0416 23:46:45.617046 2425 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:46:45.617296 kubelet[2425]: I0416 23:46:45.617288 2425 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 23:46:45.617296 kubelet[2425]: I0416 23:46:45.617305 2425 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 23:46:45.617588 kubelet[2425]: I0416 23:46:45.617426 2425 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 23:46:45.619510 kubelet[2425]: I0416 23:46:45.619472 2425 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:46:45.619718 kubelet[2425]: I0416 23:46:45.619701 2425 kubelet.go:475] "Attempting to sync node with API server"
Apr 16 23:46:45.619809 kubelet[2425]: I0416 23:46:45.619723 2425 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:46:45.619809 kubelet[2425]: I0416 23:46:45.619757 2425 kubelet.go:387] "Adding apiserver pod source"
Apr 16 23:46:45.619809 kubelet[2425]: I0416 23:46:45.619772 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:46:45.623016 kubelet[2425]: E0416 23:46:45.622987 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 23:46:45.623702 kubelet[2425]: E0416 23:46:45.623665 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a&limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 23:46:45.623828 kubelet[2425]: I0416 23:46:45.623805 2425 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:46:45.624624 kubelet[2425]: I0416 23:46:45.624583 2425 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:46:45.624714 kubelet[2425]: I0416 23:46:45.624631 2425 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 23:46:45.624714 kubelet[2425]: W0416 23:46:45.624701 2425 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 23:46:45.642244 kubelet[2425]: I0416 23:46:45.642221 2425 server.go:1262] "Started kubelet"
Apr 16 23:46:45.644360 kubelet[2425]: I0416 23:46:45.644338 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 23:46:45.645531 kubelet[2425]: I0416 23:46:45.645410 2425 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 23:46:45.652816 kubelet[2425]: I0416 23:46:45.652439 2425 server.go:310] "Adding debug handlers to kubelet server"
Apr 16 23:46:45.657762 kubelet[2425]: E0416 23:46:45.655189 2425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a.18a6fb1d25459143 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,UID:ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,},FirstTimestamp:2026-04-16 23:46:45.642178883 +0000 UTC m=+0.774335719,LastTimestamp:2026-04-16 23:46:45.642178883 +0000 UTC
m=+0.774335719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,}" Apr 16 23:46:45.659267 kubelet[2425]: I0416 23:46:45.659232 2425 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:46:45.659481 kubelet[2425]: I0416 23:46:45.659459 2425 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 23:46:45.659783 kubelet[2425]: I0416 23:46:45.659764 2425 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:46:45.660243 kubelet[2425]: I0416 23:46:45.660221 2425 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:46:45.661493 kubelet[2425]: I0416 23:46:45.661464 2425 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 23:46:45.661784 kubelet[2425]: E0416 23:46:45.661758 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" Apr 16 23:46:45.663237 kubelet[2425]: I0416 23:46:45.663212 2425 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 23:46:45.663339 kubelet[2425]: I0416 23:46:45.663283 2425 reconciler.go:29] "Reconciler: start to sync state" Apr 16 23:46:45.665922 kubelet[2425]: E0416 23:46:45.665892 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 23:46:45.666202 kubelet[2425]: E0416 23:46:45.666165 2425 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="200ms" Apr 16 23:46:45.667400 kubelet[2425]: I0416 23:46:45.667377 2425 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:46:45.667573 kubelet[2425]: E0416 23:46:45.667547 2425 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 23:46:45.667721 kubelet[2425]: I0416 23:46:45.667698 2425 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:46:45.669889 kubelet[2425]: I0416 23:46:45.669844 2425 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:46:45.691422 kubelet[2425]: I0416 23:46:45.691233 2425 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 23:46:45.693235 kubelet[2425]: I0416 23:46:45.693212 2425 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 23:46:45.694685 kubelet[2425]: I0416 23:46:45.694302 2425 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 23:46:45.694685 kubelet[2425]: I0416 23:46:45.694351 2425 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 23:46:45.694685 kubelet[2425]: E0416 23:46:45.694413 2425 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:46:45.701491 kubelet[2425]: E0416 23:46:45.701454 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 23:46:45.713565 kubelet[2425]: I0416 23:46:45.713540 2425 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 23:46:45.713661 kubelet[2425]: I0416 23:46:45.713571 2425 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 23:46:45.713661 kubelet[2425]: I0416 23:46:45.713594 2425 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:46:45.715701 kubelet[2425]: I0416 23:46:45.715635 2425 policy_none.go:49] "None policy: Start" Apr 16 23:46:45.715701 kubelet[2425]: I0416 23:46:45.715655 2425 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 23:46:45.715701 kubelet[2425]: I0416 23:46:45.715673 2425 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 23:46:45.718417 kubelet[2425]: I0416 23:46:45.718338 2425 policy_none.go:47] "Start" Apr 16 23:46:45.724706 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 23:46:45.747986 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 16 23:46:45.752628 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 23:46:45.761081 kubelet[2425]: E0416 23:46:45.761052 2425 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 23:46:45.761342 kubelet[2425]: I0416 23:46:45.761319 2425 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 23:46:45.761414 kubelet[2425]: I0416 23:46:45.761345 2425 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:46:45.761992 kubelet[2425]: I0416 23:46:45.761971 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 23:46:45.764124 kubelet[2425]: E0416 23:46:45.764075 2425 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:46:45.764207 kubelet[2425]: E0416 23:46:45.764145 2425 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" Apr 16 23:46:45.816771 systemd[1]: Created slice kubepods-burstable-pode81ddcc29cb53559a19fdd83e9dfdcf4.slice - libcontainer container kubepods-burstable-pode81ddcc29cb53559a19fdd83e9dfdcf4.slice. Apr 16 23:46:45.832167 kubelet[2425]: E0416 23:46:45.832130 2425 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.837600 systemd[1]: Created slice kubepods-burstable-podef27acac9267c78d7d0b6c509dba1ac2.slice - libcontainer container kubepods-burstable-podef27acac9267c78d7d0b6c509dba1ac2.slice. 
Apr 16 23:46:45.840266 kubelet[2425]: E0416 23:46:45.840224 2425 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.852904 systemd[1]: Created slice kubepods-burstable-podf8db5cf8045feff62d8efdb63a1260be.slice - libcontainer container kubepods-burstable-podf8db5cf8045feff62d8efdb63a1260be.slice. Apr 16 23:46:45.857647 kubelet[2425]: E0416 23:46:45.857612 2425 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.866315 kubelet[2425]: I0416 23:46:45.866276 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.866803 kubelet[2425]: E0416 23:46:45.866766 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.866925 kubelet[2425]: E0416 23:46:45.866875 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="400ms" Apr 16 23:46:45.964791 kubelet[2425]: I0416 23:46:45.964730 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-kubeconfig\") pod 
\"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.965368 kubelet[2425]: I0416 23:46:45.964787 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.965368 kubelet[2425]: I0416 23:46:45.964835 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e81ddcc29cb53559a19fdd83e9dfdcf4-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"e81ddcc29cb53559a19fdd83e9dfdcf4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.965368 kubelet[2425]: I0416 23:46:45.964860 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.965368 kubelet[2425]: I0416 23:46:45.964889 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8db5cf8045feff62d8efdb63a1260be-kubeconfig\") pod 
\"kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"f8db5cf8045feff62d8efdb63a1260be\") " pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.965597 kubelet[2425]: I0416 23:46:45.964913 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e81ddcc29cb53559a19fdd83e9dfdcf4-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"e81ddcc29cb53559a19fdd83e9dfdcf4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.965597 kubelet[2425]: I0416 23:46:45.964943 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e81ddcc29cb53559a19fdd83e9dfdcf4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"e81ddcc29cb53559a19fdd83e9dfdcf4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.965597 kubelet[2425]: I0416 23:46:45.964977 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:45.965597 kubelet[2425]: I0416 23:46:45.965014 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-k8s-certs\") pod 
\"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:46.072703 kubelet[2425]: I0416 23:46:46.072642 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:46.073085 kubelet[2425]: E0416 23:46:46.073037 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:46.136435 containerd[1563]: time="2026-04-16T23:46:46.136287631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,Uid:e81ddcc29cb53559a19fdd83e9dfdcf4,Namespace:kube-system,Attempt:0,}" Apr 16 23:46:46.144353 containerd[1563]: time="2026-04-16T23:46:46.144291419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,Uid:ef27acac9267c78d7d0b6c509dba1ac2,Namespace:kube-system,Attempt:0,}" Apr 16 23:46:46.161504 containerd[1563]: time="2026-04-16T23:46:46.161453186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,Uid:f8db5cf8045feff62d8efdb63a1260be,Namespace:kube-system,Attempt:0,}" Apr 16 23:46:46.268062 kubelet[2425]: E0416 23:46:46.268003 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="800ms" Apr 16 23:46:46.452503 kubelet[2425]: E0416 23:46:46.452360 2425 
reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a&limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 23:46:46.478069 kubelet[2425]: I0416 23:46:46.478022 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:46.478613 kubelet[2425]: E0416 23:46:46.478563 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:46.595640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671595511.mount: Deactivated successfully. Apr 16 23:46:46.603838 containerd[1563]: time="2026-04-16T23:46:46.603782047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:46:46.609295 containerd[1563]: time="2026-04-16T23:46:46.609238386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321228" Apr 16 23:46:46.610625 containerd[1563]: time="2026-04-16T23:46:46.610575057Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:46:46.611593 containerd[1563]: time="2026-04-16T23:46:46.611517320Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:46:46.613931 containerd[1563]: time="2026-04-16T23:46:46.613846720Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:46:46.615454 containerd[1563]: time="2026-04-16T23:46:46.615408549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:46:46.616633 containerd[1563]: time="2026-04-16T23:46:46.616586698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:46:46.619115 containerd[1563]: time="2026-04-16T23:46:46.618013383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:46:46.619228 containerd[1563]: time="2026-04-16T23:46:46.619196337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.017742ms" Apr 16 23:46:46.623721 containerd[1563]: time="2026-04-16T23:46:46.623683677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 460.514735ms" Apr 16 23:46:46.627134 containerd[1563]: time="2026-04-16T23:46:46.627070832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.624658ms" Apr 16 23:46:46.679022 containerd[1563]: time="2026-04-16T23:46:46.678477853Z" level=info msg="connecting to shim 56d5d82ccc1ed27896cfe7b45525b58aad98ba0b54e2dee297c020d6db54baef" address="unix:///run/containerd/s/4512bd64f33cd8bfa73f5086181ac2147aef09cfb5a694c54a4fcdff165c7f33" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:46:46.688246 containerd[1563]: time="2026-04-16T23:46:46.688196832Z" level=info msg="connecting to shim e3ee124573103503dcd3a3e47313eba697b6c8348ef51ef426084c7a6b2a86b5" address="unix:///run/containerd/s/6f2f523a65a29809ff9c3276d7f46bc2af878e870aafe651972cb11b387ef8f9" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:46:46.692275 containerd[1563]: time="2026-04-16T23:46:46.692211523Z" level=info msg="connecting to shim f10ef8c3432b6e221f51cfc615a8e02b0c9335ff572bd0273fab9c1d61e088ca" address="unix:///run/containerd/s/09d394589a904b4eedabff01bd16da6fc13d7997a54221cfb462da91d5160d04" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:46:46.734656 systemd[1]: Started cri-containerd-56d5d82ccc1ed27896cfe7b45525b58aad98ba0b54e2dee297c020d6db54baef.scope - libcontainer container 56d5d82ccc1ed27896cfe7b45525b58aad98ba0b54e2dee297c020d6db54baef. Apr 16 23:46:46.743715 systemd[1]: Started cri-containerd-e3ee124573103503dcd3a3e47313eba697b6c8348ef51ef426084c7a6b2a86b5.scope - libcontainer container e3ee124573103503dcd3a3e47313eba697b6c8348ef51ef426084c7a6b2a86b5. Apr 16 23:46:46.769016 systemd[1]: Started cri-containerd-f10ef8c3432b6e221f51cfc615a8e02b0c9335ff572bd0273fab9c1d61e088ca.scope - libcontainer container f10ef8c3432b6e221f51cfc615a8e02b0c9335ff572bd0273fab9c1d61e088ca. 
Apr 16 23:46:46.849239 kubelet[2425]: E0416 23:46:46.849062 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 23:46:46.888159 kubelet[2425]: E0416 23:46:46.888045 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 23:46:46.894945 containerd[1563]: time="2026-04-16T23:46:46.894890729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,Uid:ef27acac9267c78d7d0b6c509dba1ac2,Namespace:kube-system,Attempt:0,} returns sandbox id \"56d5d82ccc1ed27896cfe7b45525b58aad98ba0b54e2dee297c020d6db54baef\"" Apr 16 23:46:46.899628 kubelet[2425]: E0416 23:46:46.899575 2425 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b" Apr 16 23:46:46.900452 containerd[1563]: time="2026-04-16T23:46:46.900407880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,Uid:f8db5cf8045feff62d8efdb63a1260be,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3ee124573103503dcd3a3e47313eba697b6c8348ef51ef426084c7a6b2a86b5\"" Apr 16 23:46:46.902785 kubelet[2425]: E0416 23:46:46.902750 2425 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" 
podName="kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039" Apr 16 23:46:46.907177 containerd[1563]: time="2026-04-16T23:46:46.906077087Z" level=info msg="CreateContainer within sandbox \"56d5d82ccc1ed27896cfe7b45525b58aad98ba0b54e2dee297c020d6db54baef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 23:46:46.908748 containerd[1563]: time="2026-04-16T23:46:46.908715656Z" level=info msg="CreateContainer within sandbox \"e3ee124573103503dcd3a3e47313eba697b6c8348ef51ef426084c7a6b2a86b5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 23:46:46.916034 containerd[1563]: time="2026-04-16T23:46:46.915971200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a,Uid:e81ddcc29cb53559a19fdd83e9dfdcf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f10ef8c3432b6e221f51cfc615a8e02b0c9335ff572bd0273fab9c1d61e088ca\"" Apr 16 23:46:46.918352 kubelet[2425]: E0416 23:46:46.918318 2425 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039" Apr 16 23:46:46.921069 containerd[1563]: time="2026-04-16T23:46:46.921005662Z" level=info msg="Container 0aea477fd31becf5b653c76fbff76bae95e83791c0201856c08e3b40ff2f4974: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:46:46.923456 containerd[1563]: time="2026-04-16T23:46:46.923273085Z" level=info msg="CreateContainer within sandbox \"f10ef8c3432b6e221f51cfc615a8e02b0c9335ff572bd0273fab9c1d61e088ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 23:46:46.924843 containerd[1563]: time="2026-04-16T23:46:46.924785102Z" level=info msg="Container 
577e52f88d138079b9a68d8873d0f2ac2a8e185da3f438012a63a48c4b5b66cf: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:46:46.935010 containerd[1563]: time="2026-04-16T23:46:46.934960772Z" level=info msg="Container d4f51f8f704acf1391da99cd6d819a2ff9727c5eaa735a79d38cd1312b5d8b6c: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:46:46.940088 containerd[1563]: time="2026-04-16T23:46:46.940035402Z" level=info msg="CreateContainer within sandbox \"e3ee124573103503dcd3a3e47313eba697b6c8348ef51ef426084c7a6b2a86b5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0aea477fd31becf5b653c76fbff76bae95e83791c0201856c08e3b40ff2f4974\"" Apr 16 23:46:46.941610 containerd[1563]: time="2026-04-16T23:46:46.941536255Z" level=info msg="CreateContainer within sandbox \"56d5d82ccc1ed27896cfe7b45525b58aad98ba0b54e2dee297c020d6db54baef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"577e52f88d138079b9a68d8873d0f2ac2a8e185da3f438012a63a48c4b5b66cf\"" Apr 16 23:46:46.942160 containerd[1563]: time="2026-04-16T23:46:46.942069407Z" level=info msg="StartContainer for \"0aea477fd31becf5b653c76fbff76bae95e83791c0201856c08e3b40ff2f4974\"" Apr 16 23:46:46.943800 containerd[1563]: time="2026-04-16T23:46:46.943750052Z" level=info msg="connecting to shim 0aea477fd31becf5b653c76fbff76bae95e83791c0201856c08e3b40ff2f4974" address="unix:///run/containerd/s/6f2f523a65a29809ff9c3276d7f46bc2af878e870aafe651972cb11b387ef8f9" protocol=ttrpc version=3 Apr 16 23:46:46.945132 containerd[1563]: time="2026-04-16T23:46:46.944700102Z" level=info msg="StartContainer for \"577e52f88d138079b9a68d8873d0f2ac2a8e185da3f438012a63a48c4b5b66cf\"" Apr 16 23:46:46.946336 containerd[1563]: time="2026-04-16T23:46:46.946302550Z" level=info msg="connecting to shim 577e52f88d138079b9a68d8873d0f2ac2a8e185da3f438012a63a48c4b5b66cf" address="unix:///run/containerd/s/4512bd64f33cd8bfa73f5086181ac2147aef09cfb5a694c54a4fcdff165c7f33" protocol=ttrpc version=3 Apr 
16 23:46:46.948882 containerd[1563]: time="2026-04-16T23:46:46.948848006Z" level=info msg="CreateContainer within sandbox \"f10ef8c3432b6e221f51cfc615a8e02b0c9335ff572bd0273fab9c1d61e088ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4f51f8f704acf1391da99cd6d819a2ff9727c5eaa735a79d38cd1312b5d8b6c\"" Apr 16 23:46:46.951329 containerd[1563]: time="2026-04-16T23:46:46.951295642Z" level=info msg="StartContainer for \"d4f51f8f704acf1391da99cd6d819a2ff9727c5eaa735a79d38cd1312b5d8b6c\"" Apr 16 23:46:46.952924 containerd[1563]: time="2026-04-16T23:46:46.952887353Z" level=info msg="connecting to shim d4f51f8f704acf1391da99cd6d819a2ff9727c5eaa735a79d38cd1312b5d8b6c" address="unix:///run/containerd/s/09d394589a904b4eedabff01bd16da6fc13d7997a54221cfb462da91d5160d04" protocol=ttrpc version=3 Apr 16 23:46:46.986366 systemd[1]: Started cri-containerd-0aea477fd31becf5b653c76fbff76bae95e83791c0201856c08e3b40ff2f4974.scope - libcontainer container 0aea477fd31becf5b653c76fbff76bae95e83791c0201856c08e3b40ff2f4974. Apr 16 23:46:47.004319 systemd[1]: Started cri-containerd-577e52f88d138079b9a68d8873d0f2ac2a8e185da3f438012a63a48c4b5b66cf.scope - libcontainer container 577e52f88d138079b9a68d8873d0f2ac2a8e185da3f438012a63a48c4b5b66cf. Apr 16 23:46:47.016322 systemd[1]: Started cri-containerd-d4f51f8f704acf1391da99cd6d819a2ff9727c5eaa735a79d38cd1312b5d8b6c.scope - libcontainer container d4f51f8f704acf1391da99cd6d819a2ff9727c5eaa735a79d38cd1312b5d8b6c. 
Apr 16 23:46:47.061125 kubelet[2425]: E0416 23:46:47.061056 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 23:46:47.070201 kubelet[2425]: E0416 23:46:47.069955 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="1.6s" Apr 16 23:46:47.109905 containerd[1563]: time="2026-04-16T23:46:47.109860815Z" level=info msg="StartContainer for \"0aea477fd31becf5b653c76fbff76bae95e83791c0201856c08e3b40ff2f4974\" returns successfully" Apr 16 23:46:47.150247 containerd[1563]: time="2026-04-16T23:46:47.150136072Z" level=info msg="StartContainer for \"577e52f88d138079b9a68d8873d0f2ac2a8e185da3f438012a63a48c4b5b66cf\" returns successfully" Apr 16 23:46:47.162622 containerd[1563]: time="2026-04-16T23:46:47.162571686Z" level=info msg="StartContainer for \"d4f51f8f704acf1391da99cd6d819a2ff9727c5eaa735a79d38cd1312b5d8b6c\" returns successfully" Apr 16 23:46:47.285672 kubelet[2425]: I0416 23:46:47.285544 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:47.723523 kubelet[2425]: E0416 23:46:47.723386 2425 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:47.727687 kubelet[2425]: E0416 23:46:47.727652 2425 kubelet.go:3216] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:47.732545 kubelet[2425]: E0416 23:46:47.732378 2425 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:48.736390 kubelet[2425]: E0416 23:46:48.736131 2425 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:48.738118 kubelet[2425]: E0416 23:46:48.738066 2425 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:49.325079 kubelet[2425]: E0416 23:46:49.325026 2425 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" not found" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:49.433430 kubelet[2425]: I0416 23:46:49.433364 2425 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:49.462441 kubelet[2425]: I0416 23:46:49.462405 2425 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:49.470109 kubelet[2425]: E0416 23:46:49.469907 2425 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:49.470109 kubelet[2425]: I0416 23:46:49.469933 2425 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:49.471623 kubelet[2425]: E0416 23:46:49.471576 2425 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:49.471906 kubelet[2425]: I0416 23:46:49.471602 2425 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:49.477438 kubelet[2425]: E0416 23:46:49.477400 2425 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:49.625414 kubelet[2425]: I0416 23:46:49.624734 2425 apiserver.go:52] "Watching apiserver" Apr 16 23:46:49.663373 kubelet[2425]: I0416 23:46:49.663314 2425 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 23:46:49.844772 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 16 23:46:51.344916 systemd[1]: Reload requested from client PID 2711 ('systemctl') (unit session-7.scope)... Apr 16 23:46:51.344939 systemd[1]: Reloading... Apr 16 23:46:51.475538 zram_generator::config[2751]: No configuration found. 
Apr 16 23:46:51.879590 systemd[1]: Reloading finished in 533 ms. Apr 16 23:46:51.914878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:46:51.935689 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 23:46:51.936175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:46:51.936385 systemd[1]: kubelet.service: Consumed 1.277s CPU time, 125.7M memory peak. Apr 16 23:46:51.939544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:46:52.260068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:46:52.281024 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 23:46:52.353123 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 23:46:52.353123 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 23:46:52.353123 kubelet[2803]: I0416 23:46:52.351767 2803 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 23:46:52.370853 kubelet[2803]: I0416 23:46:52.370803 2803 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 23:46:52.370853 kubelet[2803]: I0416 23:46:52.370843 2803 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 23:46:52.371106 kubelet[2803]: I0416 23:46:52.370878 2803 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 23:46:52.371106 kubelet[2803]: I0416 23:46:52.370888 2803 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 23:46:52.371106 kubelet[2803]: I0416 23:46:52.371224 2803 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 23:46:52.373249 kubelet[2803]: I0416 23:46:52.373223 2803 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 23:46:52.376499 kubelet[2803]: I0416 23:46:52.376432 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 23:46:52.385145 kubelet[2803]: I0416 23:46:52.385118 2803 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 23:46:52.390169 kubelet[2803]: I0416 23:46:52.389832 2803 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 23:46:52.390280 kubelet[2803]: I0416 23:46:52.390191 2803 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 23:46:52.390466 kubelet[2803]: I0416 23:46:52.390221 2803 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 23:46:52.390466 kubelet[2803]: I0416 23:46:52.390447 2803 topology_manager.go:138] "Creating topology 
manager with none policy" Apr 16 23:46:52.390466 kubelet[2803]: I0416 23:46:52.390463 2803 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 23:46:52.390731 kubelet[2803]: I0416 23:46:52.390511 2803 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 23:46:52.390832 kubelet[2803]: I0416 23:46:52.390810 2803 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:46:52.392148 kubelet[2803]: I0416 23:46:52.391041 2803 kubelet.go:475] "Attempting to sync node with API server" Apr 16 23:46:52.392148 kubelet[2803]: I0416 23:46:52.391062 2803 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 23:46:52.392148 kubelet[2803]: I0416 23:46:52.391106 2803 kubelet.go:387] "Adding apiserver pod source" Apr 16 23:46:52.392148 kubelet[2803]: I0416 23:46:52.391123 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 23:46:52.395015 kubelet[2803]: I0416 23:46:52.392690 2803 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 23:46:52.395015 kubelet[2803]: I0416 23:46:52.393556 2803 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 23:46:52.395015 kubelet[2803]: I0416 23:46:52.393609 2803 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 23:46:52.447553 kubelet[2803]: I0416 23:46:52.446740 2803 server.go:1262] "Started kubelet" Apr 16 23:46:52.452790 kubelet[2803]: I0416 23:46:52.452584 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 23:46:52.457035 kubelet[2803]: I0416 23:46:52.456995 2803 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 23:46:52.466168 kubelet[2803]: I0416 23:46:52.466086 2803 
volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 23:46:52.469141 kubelet[2803]: I0416 23:46:52.469079 2803 server.go:310] "Adding debug handlers to kubelet server" Apr 16 23:46:52.470810 kubelet[2803]: I0416 23:46:52.470754 2803 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 23:46:52.470941 kubelet[2803]: I0416 23:46:52.470918 2803 reconciler.go:29] "Reconciler: start to sync state" Apr 16 23:46:52.474600 kubelet[2803]: I0416 23:46:52.473991 2803 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:46:52.474600 kubelet[2803]: I0416 23:46:52.474051 2803 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 23:46:52.474600 kubelet[2803]: I0416 23:46:52.474372 2803 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:46:52.474788 kubelet[2803]: I0416 23:46:52.474638 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:46:52.488152 kubelet[2803]: I0416 23:46:52.486303 2803 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:46:52.488152 kubelet[2803]: I0416 23:46:52.486440 2803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:46:52.491476 kubelet[2803]: I0416 23:46:52.491453 2803 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:46:52.495208 kubelet[2803]: E0416 23:46:52.495167 2803 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 23:46:52.522899 kubelet[2803]: I0416 23:46:52.522765 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 23:46:52.528350 kubelet[2803]: I0416 23:46:52.526805 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 23:46:52.528350 kubelet[2803]: I0416 23:46:52.526941 2803 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 23:46:52.528350 kubelet[2803]: I0416 23:46:52.526969 2803 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 23:46:52.528350 kubelet[2803]: E0416 23:46:52.527213 2803 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:46:52.592978 kubelet[2803]: I0416 23:46:52.592947 2803 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 23:46:52.593485 kubelet[2803]: I0416 23:46:52.593261 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 23:46:52.593485 kubelet[2803]: I0416 23:46:52.593306 2803 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:46:52.594147 kubelet[2803]: I0416 23:46:52.594083 2803 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 23:46:52.594444 kubelet[2803]: I0416 23:46:52.594230 2803 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 23:46:52.594659 kubelet[2803]: I0416 23:46:52.594584 2803 policy_none.go:49] "None policy: Start" Apr 16 23:46:52.594659 kubelet[2803]: I0416 23:46:52.594606 2803 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 23:46:52.594659 kubelet[2803]: I0416 23:46:52.594627 2803 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 23:46:52.594922 kubelet[2803]: I0416 23:46:52.594822 2803 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Apr 16 23:46:52.594922 kubelet[2803]: I0416 23:46:52.594842 2803 policy_none.go:47] "Start" Apr 16 23:46:52.603313 kubelet[2803]: E0416 23:46:52.603281 2803 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 23:46:52.603534 kubelet[2803]: I0416 23:46:52.603501 2803 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 23:46:52.603616 kubelet[2803]: I0416 23:46:52.603527 2803 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:46:52.605863 kubelet[2803]: I0416 23:46:52.605836 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 23:46:52.614183 kubelet[2803]: E0416 23:46:52.613260 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:46:52.628821 kubelet[2803]: I0416 23:46:52.628783 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.630109 kubelet[2803]: I0416 23:46:52.629832 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.630109 kubelet[2803]: I0416 23:46:52.629431 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.646326 kubelet[2803]: I0416 23:46:52.645908 2803 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 16 23:46:52.646973 kubelet[2803]: I0416 23:46:52.646554 2803 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 16 23:46:52.649051 kubelet[2803]: I0416 23:46:52.648901 2803 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 16 23:46:52.720841 kubelet[2803]: I0416 23:46:52.720624 2803 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.732490 kubelet[2803]: I0416 23:46:52.732452 2803 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.732690 kubelet[2803]: I0416 23:46:52.732544 2803 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.773204 kubelet[2803]: I0416 23:46:52.772968 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e81ddcc29cb53559a19fdd83e9dfdcf4-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"e81ddcc29cb53559a19fdd83e9dfdcf4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.773204 kubelet[2803]: I0416 23:46:52.773021 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.773204 kubelet[2803]: I0416 23:46:52.773053 2803 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.774482 kubelet[2803]: I0416 23:46:52.773089 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.774482 kubelet[2803]: I0416 23:46:52.773733 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8db5cf8045feff62d8efdb63a1260be-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"f8db5cf8045feff62d8efdb63a1260be\") " pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.774482 kubelet[2803]: I0416 23:46:52.773767 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e81ddcc29cb53559a19fdd83e9dfdcf4-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"e81ddcc29cb53559a19fdd83e9dfdcf4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.774482 kubelet[2803]: I0416 23:46:52.774386 2803 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e81ddcc29cb53559a19fdd83e9dfdcf4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"e81ddcc29cb53559a19fdd83e9dfdcf4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.774731 kubelet[2803]: I0416 23:46:52.774442 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:52.774731 kubelet[2803]: I0416 23:46:52.774471 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef27acac9267c78d7d0b6c509dba1ac2-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" (UID: \"ef27acac9267c78d7d0b6c509dba1ac2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:53.407866 kubelet[2803]: I0416 23:46:53.407826 2803 apiserver.go:52] "Watching apiserver" Apr 16 23:46:53.471502 kubelet[2803]: I0416 23:46:53.471448 2803 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 23:46:53.571124 kubelet[2803]: I0416 23:46:53.570509 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:53.573387 kubelet[2803]: I0416 23:46:53.573352 2803 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:53.583126 kubelet[2803]: I0416 23:46:53.582783 2803 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 16 23:46:53.583126 kubelet[2803]: E0416 23:46:53.582856 2803 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:53.588809 kubelet[2803]: I0416 23:46:53.587794 2803 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 16 23:46:53.588809 kubelet[2803]: E0416 23:46:53.587863 2803 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:46:53.636952 kubelet[2803]: I0416 23:46:53.636870 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" podStartSLOduration=1.636846635 podStartE2EDuration="1.636846635s" podCreationTimestamp="2026-04-16 23:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:46:53.619660843 +0000 UTC m=+1.330844884" watchObservedRunningTime="2026-04-16 23:46:53.636846635 +0000 UTC m=+1.348030654" Apr 16 23:46:53.649447 kubelet[2803]: I0416 23:46:53.649061 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" podStartSLOduration=1.6490403310000001 podStartE2EDuration="1.649040331s" podCreationTimestamp="2026-04-16 23:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:46:53.637668108 +0000 UTC m=+1.348852141" watchObservedRunningTime="2026-04-16 23:46:53.649040331 +0000 UTC m=+1.360224353" Apr 16 23:46:56.685388 kubelet[2803]: I0416 23:46:56.685312 2803 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 23:46:56.687005 containerd[1563]: time="2026-04-16T23:46:56.686960543Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 23:46:56.687548 kubelet[2803]: I0416 23:46:56.687305 2803 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 23:46:57.748301 kubelet[2803]: I0416 23:46:57.747236 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" podStartSLOduration=5.747214617 podStartE2EDuration="5.747214617s" podCreationTimestamp="2026-04-16 23:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:46:53.65185584 +0000 UTC m=+1.363039863" watchObservedRunningTime="2026-04-16 23:46:57.747214617 +0000 UTC m=+5.458398639" Apr 16 23:46:57.768512 systemd[1]: Created slice kubepods-besteffort-podd4a75143_6417_41bc_b6c0_61ab9260a410.slice - libcontainer container kubepods-besteffort-podd4a75143_6417_41bc_b6c0_61ab9260a410.slice. 
Apr 16 23:46:57.809488 kubelet[2803]: I0416 23:46:57.809429 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4a75143-6417-41bc-b6c0-61ab9260a410-xtables-lock\") pod \"kube-proxy-46dbh\" (UID: \"d4a75143-6417-41bc-b6c0-61ab9260a410\") " pod="kube-system/kube-proxy-46dbh" Apr 16 23:46:57.809488 kubelet[2803]: I0416 23:46:57.809486 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4a75143-6417-41bc-b6c0-61ab9260a410-lib-modules\") pod \"kube-proxy-46dbh\" (UID: \"d4a75143-6417-41bc-b6c0-61ab9260a410\") " pod="kube-system/kube-proxy-46dbh" Apr 16 23:46:57.809706 kubelet[2803]: I0416 23:46:57.809516 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4a75143-6417-41bc-b6c0-61ab9260a410-kube-proxy\") pod \"kube-proxy-46dbh\" (UID: \"d4a75143-6417-41bc-b6c0-61ab9260a410\") " pod="kube-system/kube-proxy-46dbh" Apr 16 23:46:57.809706 kubelet[2803]: I0416 23:46:57.809538 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2grcd\" (UniqueName: \"kubernetes.io/projected/d4a75143-6417-41bc-b6c0-61ab9260a410-kube-api-access-2grcd\") pod \"kube-proxy-46dbh\" (UID: \"d4a75143-6417-41bc-b6c0-61ab9260a410\") " pod="kube-system/kube-proxy-46dbh" Apr 16 23:46:57.874595 systemd[1]: Created slice kubepods-besteffort-pod307c117f_a4bf_4e66_b769_38357c8890f5.slice - libcontainer container kubepods-besteffort-pod307c117f_a4bf_4e66_b769_38357c8890f5.slice. 
Apr 16 23:46:57.910301 kubelet[2803]: I0416 23:46:57.910227 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/307c117f-a4bf-4e66-b769-38357c8890f5-var-lib-calico\") pod \"tigera-operator-5588576f44-jww46\" (UID: \"307c117f-a4bf-4e66-b769-38357c8890f5\") " pod="tigera-operator/tigera-operator-5588576f44-jww46" Apr 16 23:46:57.910301 kubelet[2803]: I0416 23:46:57.910276 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m254z\" (UniqueName: \"kubernetes.io/projected/307c117f-a4bf-4e66-b769-38357c8890f5-kube-api-access-m254z\") pod \"tigera-operator-5588576f44-jww46\" (UID: \"307c117f-a4bf-4e66-b769-38357c8890f5\") " pod="tigera-operator/tigera-operator-5588576f44-jww46" Apr 16 23:46:58.083751 containerd[1563]: time="2026-04-16T23:46:58.083137985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46dbh,Uid:d4a75143-6417-41bc-b6c0-61ab9260a410,Namespace:kube-system,Attempt:0,}" Apr 16 23:46:58.110514 containerd[1563]: time="2026-04-16T23:46:58.110411844Z" level=info msg="connecting to shim 9edb6b660e64a5c822a751fdd6d92aa3d31f4e9aa96ed8c2f98bd40a137ed1e0" address="unix:///run/containerd/s/40988f9ed81f49d4c7a44ff509dc301809796ff3ff7f8728fc0d5e684fb5975a" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:46:58.146277 systemd[1]: Started cri-containerd-9edb6b660e64a5c822a751fdd6d92aa3d31f4e9aa96ed8c2f98bd40a137ed1e0.scope - libcontainer container 9edb6b660e64a5c822a751fdd6d92aa3d31f4e9aa96ed8c2f98bd40a137ed1e0. 
Apr 16 23:46:58.185777 containerd[1563]: time="2026-04-16T23:46:58.185327595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-jww46,Uid:307c117f-a4bf-4e66-b769-38357c8890f5,Namespace:tigera-operator,Attempt:0,}"
Apr 16 23:46:58.187781 containerd[1563]: time="2026-04-16T23:46:58.187735416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46dbh,Uid:d4a75143-6417-41bc-b6c0-61ab9260a410,Namespace:kube-system,Attempt:0,} returns sandbox id \"9edb6b660e64a5c822a751fdd6d92aa3d31f4e9aa96ed8c2f98bd40a137ed1e0\""
Apr 16 23:46:58.196335 containerd[1563]: time="2026-04-16T23:46:58.196240570Z" level=info msg="CreateContainer within sandbox \"9edb6b660e64a5c822a751fdd6d92aa3d31f4e9aa96ed8c2f98bd40a137ed1e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 23:46:58.217120 containerd[1563]: time="2026-04-16T23:46:58.216693669Z" level=info msg="Container 8c9ad3a7d669bd3ff2951bf3045759c1c3ff670e16c4062cf11edcb0117dafa6: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:46:58.230625 containerd[1563]: time="2026-04-16T23:46:58.230576755Z" level=info msg="CreateContainer within sandbox \"9edb6b660e64a5c822a751fdd6d92aa3d31f4e9aa96ed8c2f98bd40a137ed1e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c9ad3a7d669bd3ff2951bf3045759c1c3ff670e16c4062cf11edcb0117dafa6\""
Apr 16 23:46:58.233627 containerd[1563]: time="2026-04-16T23:46:58.233585598Z" level=info msg="StartContainer for \"8c9ad3a7d669bd3ff2951bf3045759c1c3ff670e16c4062cf11edcb0117dafa6\""
Apr 16 23:46:58.237286 containerd[1563]: time="2026-04-16T23:46:58.237248422Z" level=info msg="connecting to shim cd8cd6fec55609f463c66320d659c6f44da512e7f1833920fcf53d0f94118714" address="unix:///run/containerd/s/253530db32cd1d67df05d46c2ebdedb086f43248a6b241673236015edd6a857b" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:46:58.240554 containerd[1563]: time="2026-04-16T23:46:58.240515245Z" level=info msg="connecting to shim 8c9ad3a7d669bd3ff2951bf3045759c1c3ff670e16c4062cf11edcb0117dafa6" address="unix:///run/containerd/s/40988f9ed81f49d4c7a44ff509dc301809796ff3ff7f8728fc0d5e684fb5975a" protocol=ttrpc version=3
Apr 16 23:46:58.268540 systemd[1]: Started cri-containerd-cd8cd6fec55609f463c66320d659c6f44da512e7f1833920fcf53d0f94118714.scope - libcontainer container cd8cd6fec55609f463c66320d659c6f44da512e7f1833920fcf53d0f94118714.
Apr 16 23:46:58.276208 systemd[1]: Started cri-containerd-8c9ad3a7d669bd3ff2951bf3045759c1c3ff670e16c4062cf11edcb0117dafa6.scope - libcontainer container 8c9ad3a7d669bd3ff2951bf3045759c1c3ff670e16c4062cf11edcb0117dafa6.
Apr 16 23:46:58.365901 containerd[1563]: time="2026-04-16T23:46:58.365550257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-jww46,Uid:307c117f-a4bf-4e66-b769-38357c8890f5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cd8cd6fec55609f463c66320d659c6f44da512e7f1833920fcf53d0f94118714\""
Apr 16 23:46:58.375839 containerd[1563]: time="2026-04-16T23:46:58.375737414Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 23:46:58.396757 containerd[1563]: time="2026-04-16T23:46:58.396465726Z" level=info msg="StartContainer for \"8c9ad3a7d669bd3ff2951bf3045759c1c3ff670e16c4062cf11edcb0117dafa6\" returns successfully"
Apr 16 23:46:58.601535 kubelet[2803]: I0416 23:46:58.601454 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-46dbh" podStartSLOduration=1.6014325619999998 podStartE2EDuration="1.601432562s" podCreationTimestamp="2026-04-16 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:46:58.601349215 +0000 UTC m=+6.312533239" watchObservedRunningTime="2026-04-16 23:46:58.601432562 +0000 UTC m=+6.312616585"
Apr 16 23:46:59.283533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011844546.mount: Deactivated successfully.
Apr 16 23:47:00.937962 containerd[1563]: time="2026-04-16T23:47:00.937906927Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:00.939285 containerd[1563]: time="2026-04-16T23:47:00.939012588Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 16 23:47:00.940257 containerd[1563]: time="2026-04-16T23:47:00.940189548Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:00.944361 containerd[1563]: time="2026-04-16T23:47:00.944265488Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:00.945301 containerd[1563]: time="2026-04-16T23:47:00.945112677Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.569290067s"
Apr 16 23:47:00.945301 containerd[1563]: time="2026-04-16T23:47:00.945169170Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 16 23:47:00.950985 containerd[1563]: time="2026-04-16T23:47:00.950945047Z" level=info msg="CreateContainer within sandbox \"cd8cd6fec55609f463c66320d659c6f44da512e7f1833920fcf53d0f94118714\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 16 23:47:00.960415 containerd[1563]: time="2026-04-16T23:47:00.960367316Z" level=info msg="Container 1bf3abf4c1b34e2e4166556e6222cf4788b708eb4c3a3a7f70a585f8146725af: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:47:00.980142 containerd[1563]: time="2026-04-16T23:47:00.979589832Z" level=info msg="CreateContainer within sandbox \"cd8cd6fec55609f463c66320d659c6f44da512e7f1833920fcf53d0f94118714\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1bf3abf4c1b34e2e4166556e6222cf4788b708eb4c3a3a7f70a585f8146725af\""
Apr 16 23:47:00.980274 containerd[1563]: time="2026-04-16T23:47:00.980229591Z" level=info msg="StartContainer for \"1bf3abf4c1b34e2e4166556e6222cf4788b708eb4c3a3a7f70a585f8146725af\""
Apr 16 23:47:00.981602 containerd[1563]: time="2026-04-16T23:47:00.981519294Z" level=info msg="connecting to shim 1bf3abf4c1b34e2e4166556e6222cf4788b708eb4c3a3a7f70a585f8146725af" address="unix:///run/containerd/s/253530db32cd1d67df05d46c2ebdedb086f43248a6b241673236015edd6a857b" protocol=ttrpc version=3
Apr 16 23:47:01.029345 systemd[1]: Started cri-containerd-1bf3abf4c1b34e2e4166556e6222cf4788b708eb4c3a3a7f70a585f8146725af.scope - libcontainer container 1bf3abf4c1b34e2e4166556e6222cf4788b708eb4c3a3a7f70a585f8146725af.
Apr 16 23:47:01.076854 containerd[1563]: time="2026-04-16T23:47:01.076778607Z" level=info msg="StartContainer for \"1bf3abf4c1b34e2e4166556e6222cf4788b708eb4c3a3a7f70a585f8146725af\" returns successfully"
Apr 16 23:47:01.646908 kubelet[2803]: I0416 23:47:01.646821 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-jww46" podStartSLOduration=2.0706868370000002 podStartE2EDuration="4.646800396s" podCreationTimestamp="2026-04-16 23:46:57 +0000 UTC" firstStartedPulling="2026-04-16 23:46:58.370350273 +0000 UTC m=+6.081534285" lastFinishedPulling="2026-04-16 23:47:00.946463826 +0000 UTC m=+8.657647844" observedRunningTime="2026-04-16 23:47:01.646530393 +0000 UTC m=+9.357714415" watchObservedRunningTime="2026-04-16 23:47:01.646800396 +0000 UTC m=+9.357984417"
Apr 16 23:47:04.048989 update_engine[1538]: I20260416 23:47:04.048163 1538 update_attempter.cc:509] Updating boot flags...
Apr 16 23:47:08.685918 sudo[1850]: pam_unix(sudo:session): session closed for user root
Apr 16 23:47:08.799186 sshd[1849]: Connection closed by 50.85.169.122 port 56430
Apr 16 23:47:08.800412 sshd-session[1846]: pam_unix(sshd:session): session closed for user core
Apr 16 23:47:08.812594 systemd[1]: sshd@6-10.128.0.50:22-50.85.169.122:56430.service: Deactivated successfully.
Apr 16 23:47:08.817011 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 23:47:08.817899 systemd[1]: session-7.scope: Consumed 7.310s CPU time, 230M memory peak.
Apr 16 23:47:08.821849 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit.
Apr 16 23:47:08.825532 systemd-logind[1534]: Removed session 7.
Apr 16 23:47:10.299088 systemd[1]: Created slice kubepods-besteffort-podcf6ad601_b821_40d8_8a02_d0518306fb38.slice - libcontainer container kubepods-besteffort-podcf6ad601_b821_40d8_8a02_d0518306fb38.slice.
Apr 16 23:47:10.403933 kubelet[2803]: I0416 23:47:10.403791 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cf6ad601-b821-40d8-8a02-d0518306fb38-typha-certs\") pod \"calico-typha-7ddfc6cbdf-2xm84\" (UID: \"cf6ad601-b821-40d8-8a02-d0518306fb38\") " pod="calico-system/calico-typha-7ddfc6cbdf-2xm84"
Apr 16 23:47:10.403933 kubelet[2803]: I0416 23:47:10.403850 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k8lv\" (UniqueName: \"kubernetes.io/projected/cf6ad601-b821-40d8-8a02-d0518306fb38-kube-api-access-8k8lv\") pod \"calico-typha-7ddfc6cbdf-2xm84\" (UID: \"cf6ad601-b821-40d8-8a02-d0518306fb38\") " pod="calico-system/calico-typha-7ddfc6cbdf-2xm84"
Apr 16 23:47:10.403933 kubelet[2803]: I0416 23:47:10.403886 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf6ad601-b821-40d8-8a02-d0518306fb38-tigera-ca-bundle\") pod \"calico-typha-7ddfc6cbdf-2xm84\" (UID: \"cf6ad601-b821-40d8-8a02-d0518306fb38\") " pod="calico-system/calico-typha-7ddfc6cbdf-2xm84"
Apr 16 23:47:10.451979 systemd[1]: Created slice kubepods-besteffort-pod4b08be7a_a85d_4047_aeb0_c216ac1b6b27.slice - libcontainer container kubepods-besteffort-pod4b08be7a_a85d_4047_aeb0_c216ac1b6b27.slice.
Apr 16 23:47:10.504172 kubelet[2803]: I0416 23:47:10.504052 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-sys-fs\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.504609 kubelet[2803]: I0416 23:47:10.504570 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-cni-log-dir\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.505584 kubelet[2803]: I0416 23:47:10.504619 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-tigera-ca-bundle\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.505584 kubelet[2803]: I0416 23:47:10.504655 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-var-run-calico\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.505584 kubelet[2803]: I0416 23:47:10.504681 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-lib-modules\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.505584 kubelet[2803]: I0416 23:47:10.504712 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-var-lib-calico\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.505584 kubelet[2803]: I0416 23:47:10.504746 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-nodeproc\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.505911 kubelet[2803]: I0416 23:47:10.504783 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-policysync\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.505911 kubelet[2803]: I0416 23:47:10.504887 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-bpffs\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.507032 kubelet[2803]: I0416 23:47:10.506752 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-cni-net-dir\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.507369 kubelet[2803]: I0416 23:47:10.507069 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-node-certs\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.508193 kubelet[2803]: I0416 23:47:10.507875 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-xtables-lock\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.508193 kubelet[2803]: I0416 23:47:10.507974 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttcpd\" (UniqueName: \"kubernetes.io/projected/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-kube-api-access-ttcpd\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.508193 kubelet[2803]: I0416 23:47:10.508052 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-cni-bin-dir\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.509395 kubelet[2803]: I0416 23:47:10.508078 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4b08be7a-a85d-4047-aeb0-c216ac1b6b27-flexvol-driver-host\") pod \"calico-node-sdxzd\" (UID: \"4b08be7a-a85d-4047-aeb0-c216ac1b6b27\") " pod="calico-system/calico-node-sdxzd"
Apr 16 23:47:10.568606 kubelet[2803]: E0416 23:47:10.568456 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c"
Apr 16 23:47:10.609711 kubelet[2803]: I0416 23:47:10.609660 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/45c1d843-c2b2-409a-bd42-84f1b45c185c-socket-dir\") pod \"csi-node-driver-z22dv\" (UID: \"45c1d843-c2b2-409a-bd42-84f1b45c185c\") " pod="calico-system/csi-node-driver-z22dv"
Apr 16 23:47:10.609909 kubelet[2803]: I0416 23:47:10.609824 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45c1d843-c2b2-409a-bd42-84f1b45c185c-kubelet-dir\") pod \"csi-node-driver-z22dv\" (UID: \"45c1d843-c2b2-409a-bd42-84f1b45c185c\") " pod="calico-system/csi-node-driver-z22dv"
Apr 16 23:47:10.609909 kubelet[2803]: I0416 23:47:10.609853 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/45c1d843-c2b2-409a-bd42-84f1b45c185c-registration-dir\") pod \"csi-node-driver-z22dv\" (UID: \"45c1d843-c2b2-409a-bd42-84f1b45c185c\") " pod="calico-system/csi-node-driver-z22dv"
Apr 16 23:47:10.609909 kubelet[2803]: I0416 23:47:10.609876 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/45c1d843-c2b2-409a-bd42-84f1b45c185c-varrun\") pod \"csi-node-driver-z22dv\" (UID: \"45c1d843-c2b2-409a-bd42-84f1b45c185c\") " pod="calico-system/csi-node-driver-z22dv"
Apr 16 23:47:10.610070 kubelet[2803]: I0416 23:47:10.609919 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8xvg\" (UniqueName: \"kubernetes.io/projected/45c1d843-c2b2-409a-bd42-84f1b45c185c-kube-api-access-p8xvg\") pod \"csi-node-driver-z22dv\" (UID: \"45c1d843-c2b2-409a-bd42-84f1b45c185c\") " pod="calico-system/csi-node-driver-z22dv"
Apr 16 23:47:10.615651 kubelet[2803]: E0416 23:47:10.615142 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.615651 kubelet[2803]: W0416 23:47:10.615171 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.615651 kubelet[2803]: E0416 23:47:10.615198 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.619208 kubelet[2803]: E0416 23:47:10.619183 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.622120 containerd[1563]: time="2026-04-16T23:47:10.619620701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ddfc6cbdf-2xm84,Uid:cf6ad601-b821-40d8-8a02-d0518306fb38,Namespace:calico-system,Attempt:0,}"
Apr 16 23:47:10.622633 kubelet[2803]: W0416 23:47:10.620138 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.622633 kubelet[2803]: E0416 23:47:10.620175 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.622633 kubelet[2803]: E0416 23:47:10.620613 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.622633 kubelet[2803]: W0416 23:47:10.620630 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.622633 kubelet[2803]: E0416 23:47:10.620683 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.627121 kubelet[2803]: E0416 23:47:10.624606 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.627121 kubelet[2803]: W0416 23:47:10.624628 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.627121 kubelet[2803]: E0416 23:47:10.624769 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.627121 kubelet[2803]: E0416 23:47:10.626611 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.627121 kubelet[2803]: W0416 23:47:10.626628 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.627121 kubelet[2803]: E0416 23:47:10.626945 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.632181 kubelet[2803]: E0416 23:47:10.630233 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.632181 kubelet[2803]: W0416 23:47:10.630383 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.632181 kubelet[2803]: E0416 23:47:10.630406 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.637482 kubelet[2803]: E0416 23:47:10.637455 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.638158 kubelet[2803]: W0416 23:47:10.637479 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.638262 kubelet[2803]: E0416 23:47:10.638168 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.639398 kubelet[2803]: E0416 23:47:10.639370 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.639398 kubelet[2803]: W0416 23:47:10.639394 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.639574 kubelet[2803]: E0416 23:47:10.639415 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.641586 kubelet[2803]: E0416 23:47:10.641558 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.641872 kubelet[2803]: W0416 23:47:10.641583 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.643136 kubelet[2803]: E0416 23:47:10.642277 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.645798 kubelet[2803]: E0416 23:47:10.645769 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.645906 kubelet[2803]: W0416 23:47:10.645824 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.645906 kubelet[2803]: E0416 23:47:10.645845 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.646448 kubelet[2803]: E0416 23:47:10.646421 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.646448 kubelet[2803]: W0416 23:47:10.646444 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.646611 kubelet[2803]: E0416 23:47:10.646463 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.647741 kubelet[2803]: E0416 23:47:10.647278 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.647741 kubelet[2803]: W0416 23:47:10.647589 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.647741 kubelet[2803]: E0416 23:47:10.647616 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.649076 kubelet[2803]: E0416 23:47:10.649049 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.650257 kubelet[2803]: W0416 23:47:10.650232 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.650372 kubelet[2803]: E0416 23:47:10.650354 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.650943 kubelet[2803]: E0416 23:47:10.650887 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.650943 kubelet[2803]: W0416 23:47:10.650906 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.650943 kubelet[2803]: E0416 23:47:10.650923 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.652024 kubelet[2803]: E0416 23:47:10.651550 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.652024 kubelet[2803]: W0416 23:47:10.651903 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.654342 kubelet[2803]: E0416 23:47:10.652199 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.654667 kubelet[2803]: E0416 23:47:10.654648 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.654832 kubelet[2803]: W0416 23:47:10.654773 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.654832 kubelet[2803]: E0416 23:47:10.654798 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.693247 kubelet[2803]: E0416 23:47:10.688264 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.693247 kubelet[2803]: W0416 23:47:10.692159 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.693247 kubelet[2803]: E0416 23:47:10.692188 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.696724 containerd[1563]: time="2026-04-16T23:47:10.696673647Z" level=info msg="connecting to shim b277dc82a5d7ffbbbcdb278539e962a45e063f02f8063c3251a5ac4da8cb1f08" address="unix:///run/containerd/s/e9f4bec1d2e1db89c16df2065f1ebf8cab3bc7b8ed010a088fdd5efff2aa42b3" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:47:10.711747 kubelet[2803]: E0416 23:47:10.711573 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.711747 kubelet[2803]: W0416 23:47:10.711605 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.711747 kubelet[2803]: E0416 23:47:10.711630 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.713553 kubelet[2803]: E0416 23:47:10.713383 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.713553 kubelet[2803]: W0416 23:47:10.713405 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.713553 kubelet[2803]: E0416 23:47:10.713426 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.716539 kubelet[2803]: E0416 23:47:10.716441 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.716539 kubelet[2803]: W0416 23:47:10.716464 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.716539 kubelet[2803]: E0416 23:47:10.716504 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.718782 kubelet[2803]: E0416 23:47:10.717154 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.718782 kubelet[2803]: W0416 23:47:10.717176 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.718782 kubelet[2803]: E0416 23:47:10.717195 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.718782 kubelet[2803]: E0416 23:47:10.718398 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.718782 kubelet[2803]: W0416 23:47:10.718414 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.718782 kubelet[2803]: E0416 23:47:10.718432 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.718782 kubelet[2803]: E0416 23:47:10.718774 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.718782 kubelet[2803]: W0416 23:47:10.718790 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.719567 kubelet[2803]: E0416 23:47:10.718806 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.720340 kubelet[2803]: E0416 23:47:10.720308 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.720340 kubelet[2803]: W0416 23:47:10.720331 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.720483 kubelet[2803]: E0416 23:47:10.720348 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.720709 kubelet[2803]: E0416 23:47:10.720679 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.720709 kubelet[2803]: W0416 23:47:10.720699 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.720946 kubelet[2803]: E0416 23:47:10.720734 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:47:10.722258 kubelet[2803]: E0416 23:47:10.722186 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:47:10.722258 kubelet[2803]: W0416 23:47:10.722211 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:47:10.722258 kubelet[2803]: E0416 23:47:10.722228 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:47:10.724136 kubelet[2803]: E0416 23:47:10.722558 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.724136 kubelet[2803]: W0416 23:47:10.722575 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.724136 kubelet[2803]: E0416 23:47:10.722591 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:47:10.724405 kubelet[2803]: E0416 23:47:10.724381 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.724405 kubelet[2803]: W0416 23:47:10.724406 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.724663 kubelet[2803]: E0416 23:47:10.724424 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:47:10.724841 kubelet[2803]: E0416 23:47:10.724819 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.724841 kubelet[2803]: W0416 23:47:10.724840 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.724962 kubelet[2803]: E0416 23:47:10.724856 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:47:10.727611 kubelet[2803]: E0416 23:47:10.727184 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.727611 kubelet[2803]: W0416 23:47:10.727205 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.727611 kubelet[2803]: E0416 23:47:10.727223 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:47:10.727820 kubelet[2803]: E0416 23:47:10.727799 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.727889 kubelet[2803]: W0416 23:47:10.727820 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.727889 kubelet[2803]: E0416 23:47:10.727837 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:47:10.728960 kubelet[2803]: E0416 23:47:10.728696 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.728960 kubelet[2803]: W0416 23:47:10.728722 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.728960 kubelet[2803]: E0416 23:47:10.728739 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:47:10.730809 kubelet[2803]: E0416 23:47:10.730181 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.730809 kubelet[2803]: W0416 23:47:10.730197 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.730809 kubelet[2803]: E0416 23:47:10.730213 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:47:10.730972 kubelet[2803]: E0416 23:47:10.730825 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.730972 kubelet[2803]: W0416 23:47:10.730841 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.731074 kubelet[2803]: E0416 23:47:10.730985 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:47:10.733244 kubelet[2803]: E0416 23:47:10.733219 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.733244 kubelet[2803]: W0416 23:47:10.733242 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.733406 kubelet[2803]: E0416 23:47:10.733260 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:47:10.733882 kubelet[2803]: E0416 23:47:10.733854 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.733985 kubelet[2803]: W0416 23:47:10.733902 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.733985 kubelet[2803]: E0416 23:47:10.733933 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:47:10.735463 kubelet[2803]: E0416 23:47:10.735436 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.735463 kubelet[2803]: W0416 23:47:10.735459 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.735630 kubelet[2803]: E0416 23:47:10.735475 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:47:10.737194 kubelet[2803]: E0416 23:47:10.737156 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.737475 kubelet[2803]: W0416 23:47:10.737229 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.737475 kubelet[2803]: E0416 23:47:10.737251 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:47:10.737968 kubelet[2803]: E0416 23:47:10.737897 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.737968 kubelet[2803]: W0416 23:47:10.737916 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.737968 kubelet[2803]: E0416 23:47:10.737935 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:47:10.739568 kubelet[2803]: E0416 23:47:10.739490 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.739568 kubelet[2803]: W0416 23:47:10.739545 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.739568 kubelet[2803]: E0416 23:47:10.739564 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:47:10.742365 kubelet[2803]: E0416 23:47:10.742310 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.742365 kubelet[2803]: W0416 23:47:10.742330 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.742365 kubelet[2803]: E0416 23:47:10.742348 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:47:10.743510 kubelet[2803]: E0416 23:47:10.743400 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.743934 kubelet[2803]: W0416 23:47:10.743911 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.744014 kubelet[2803]: E0416 23:47:10.743938 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:47:10.770282 containerd[1563]: time="2026-04-16T23:47:10.769256863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sdxzd,Uid:4b08be7a-a85d-4047-aeb0-c216ac1b6b27,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:10.775744 kubelet[2803]: E0416 23:47:10.775711 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:47:10.775912 kubelet[2803]: W0416 23:47:10.775747 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:47:10.775912 kubelet[2803]: E0416 23:47:10.775776 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:47:10.798066 systemd[1]: Started cri-containerd-b277dc82a5d7ffbbbcdb278539e962a45e063f02f8063c3251a5ac4da8cb1f08.scope - libcontainer container b277dc82a5d7ffbbbcdb278539e962a45e063f02f8063c3251a5ac4da8cb1f08. Apr 16 23:47:10.825234 containerd[1563]: time="2026-04-16T23:47:10.825068807Z" level=info msg="connecting to shim baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5" address="unix:///run/containerd/s/ff3eb44ba33dfda94d4f83a698e0729ff295fe6fdac454246bc580ee93fc3fe5" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:47:10.896311 systemd[1]: Started cri-containerd-baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5.scope - libcontainer container baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5. 
Apr 16 23:47:10.957836 containerd[1563]: time="2026-04-16T23:47:10.957778154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sdxzd,Uid:4b08be7a-a85d-4047-aeb0-c216ac1b6b27,Namespace:calico-system,Attempt:0,} returns sandbox id \"baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5\""
Apr 16 23:47:10.961409 containerd[1563]: time="2026-04-16T23:47:10.961373071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 16 23:47:11.018402 containerd[1563]: time="2026-04-16T23:47:11.018262431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ddfc6cbdf-2xm84,Uid:cf6ad601-b821-40d8-8a02-d0518306fb38,Namespace:calico-system,Attempt:0,} returns sandbox id \"b277dc82a5d7ffbbbcdb278539e962a45e063f02f8063c3251a5ac4da8cb1f08\""
Apr 16 23:47:11.881355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878616538.mount: Deactivated successfully.
Apr 16 23:47:12.010416 containerd[1563]: time="2026-04-16T23:47:12.010345987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:12.011630 containerd[1563]: time="2026-04-16T23:47:12.011574171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433"
Apr 16 23:47:12.012936 containerd[1563]: time="2026-04-16T23:47:12.012870695Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:12.015997 containerd[1563]: time="2026-04-16T23:47:12.015938913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:12.017120 containerd[1563]: time="2026-04-16T23:47:12.016828108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.055409355s"
Apr 16 23:47:12.017120 containerd[1563]: time="2026-04-16T23:47:12.016873613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 16 23:47:12.019013 containerd[1563]: time="2026-04-16T23:47:12.018975731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 16 23:47:12.023602 containerd[1563]: time="2026-04-16T23:47:12.023566020Z" level=info msg="CreateContainer within sandbox \"baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 16 23:47:12.035006 containerd[1563]: time="2026-04-16T23:47:12.033665275Z" level=info msg="Container 5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:47:12.056532 containerd[1563]: time="2026-04-16T23:47:12.056487017Z" level=info msg="CreateContainer within sandbox \"baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194\""
Apr 16 23:47:12.057480 containerd[1563]: time="2026-04-16T23:47:12.057362406Z" level=info msg="StartContainer for \"5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194\""
Apr 16 23:47:12.059711 containerd[1563]: time="2026-04-16T23:47:12.059672218Z" level=info msg="connecting to shim 5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194" address="unix:///run/containerd/s/ff3eb44ba33dfda94d4f83a698e0729ff295fe6fdac454246bc580ee93fc3fe5" protocol=ttrpc version=3
Apr 16 23:47:12.087434 systemd[1]: Started cri-containerd-5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194.scope - libcontainer container 5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194.
Apr 16 23:47:12.171223 containerd[1563]: time="2026-04-16T23:47:12.170046032Z" level=info msg="StartContainer for \"5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194\" returns successfully"
Apr 16 23:47:12.183477 systemd[1]: cri-containerd-5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194.scope: Deactivated successfully.
Apr 16 23:47:12.187858 containerd[1563]: time="2026-04-16T23:47:12.187804845Z" level=info msg="received container exit event container_id:\"5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194\" id:\"5d766f055cacee8ddaa671586295ceefb2a2b82dd74bd33dceb67ad2a9cdb194\" pid:3387 exited_at:{seconds:1776383232 nanos:187385715}"
Apr 16 23:47:12.530185 kubelet[2803]: E0416 23:47:12.528855 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c"
Apr 16 23:47:14.528894 kubelet[2803]: E0416 23:47:14.528829 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c"
Apr 16 23:47:14.785507 containerd[1563]: time="2026-04-16T23:47:14.785353804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:14.787160 containerd[1563]: time="2026-04-16T23:47:14.787088661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413"
Apr 16 23:47:14.788664 containerd[1563]: time="2026-04-16T23:47:14.788598152Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:14.791931 containerd[1563]: time="2026-04-16T23:47:14.791868717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:14.793022 containerd[1563]: time="2026-04-16T23:47:14.792951089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.773927398s"
Apr 16 23:47:14.793022 containerd[1563]: time="2026-04-16T23:47:14.792996892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 16 23:47:14.795169 containerd[1563]: time="2026-04-16T23:47:14.795129882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 16 23:47:14.820624 containerd[1563]: time="2026-04-16T23:47:14.820574928Z" level=info msg="CreateContainer within sandbox \"b277dc82a5d7ffbbbcdb278539e962a45e063f02f8063c3251a5ac4da8cb1f08\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 16 23:47:14.830946 containerd[1563]: time="2026-04-16T23:47:14.829679682Z" level=info msg="Container f5f4333ee9c4918ac2182a60bc09da730a401db6a1903486c0f515bdc6ec04a8: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:47:14.850557 containerd[1563]: time="2026-04-16T23:47:14.850492803Z" level=info msg="CreateContainer within sandbox \"b277dc82a5d7ffbbbcdb278539e962a45e063f02f8063c3251a5ac4da8cb1f08\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f5f4333ee9c4918ac2182a60bc09da730a401db6a1903486c0f515bdc6ec04a8\""
Apr 16 23:47:14.852810 containerd[1563]: time="2026-04-16T23:47:14.851191664Z" level=info msg="StartContainer for \"f5f4333ee9c4918ac2182a60bc09da730a401db6a1903486c0f515bdc6ec04a8\""
Apr 16 23:47:14.853426 containerd[1563]: time="2026-04-16T23:47:14.853377515Z" level=info msg="connecting to shim f5f4333ee9c4918ac2182a60bc09da730a401db6a1903486c0f515bdc6ec04a8" address="unix:///run/containerd/s/e9f4bec1d2e1db89c16df2065f1ebf8cab3bc7b8ed010a088fdd5efff2aa42b3" protocol=ttrpc version=3
Apr 16 23:47:14.888317 systemd[1]: Started cri-containerd-f5f4333ee9c4918ac2182a60bc09da730a401db6a1903486c0f515bdc6ec04a8.scope - libcontainer container f5f4333ee9c4918ac2182a60bc09da730a401db6a1903486c0f515bdc6ec04a8.
Apr 16 23:47:14.960888 containerd[1563]: time="2026-04-16T23:47:14.960844370Z" level=info msg="StartContainer for \"f5f4333ee9c4918ac2182a60bc09da730a401db6a1903486c0f515bdc6ec04a8\" returns successfully"
Apr 16 23:47:16.529349 kubelet[2803]: E0416 23:47:16.527854 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c"
Apr 16 23:47:16.665999 kubelet[2803]: I0416 23:47:16.665932 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:47:18.529181 kubelet[2803]: E0416 23:47:18.528696 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c"
Apr 16 23:47:20.528242 kubelet[2803]: E0416 23:47:20.528185 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c"
Apr 16 23:47:21.657658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309243707.mount: Deactivated successfully.
Apr 16 23:47:21.699519 containerd[1563]: time="2026-04-16T23:47:21.699464871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:21.700574 containerd[1563]: time="2026-04-16T23:47:21.700516689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 16 23:47:21.702085 containerd[1563]: time="2026-04-16T23:47:21.702016260Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:21.706213 containerd[1563]: time="2026-04-16T23:47:21.706132478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:21.708111 containerd[1563]: time="2026-04-16T23:47:21.708055959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.912767455s"
Apr 16 23:47:21.708392 containerd[1563]: time="2026-04-16T23:47:21.708247290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 16 23:47:21.714430 containerd[1563]: time="2026-04-16T23:47:21.714392971Z" level=info msg="CreateContainer within sandbox \"baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 16 23:47:21.728548 containerd[1563]: time="2026-04-16T23:47:21.728503208Z" level=info msg="Container 9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:47:21.739774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1309199302.mount: Deactivated successfully.
Apr 16 23:47:21.745929 containerd[1563]: time="2026-04-16T23:47:21.745867405Z" level=info msg="CreateContainer within sandbox \"baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580\""
Apr 16 23:47:21.746591 containerd[1563]: time="2026-04-16T23:47:21.746566215Z" level=info msg="StartContainer for \"9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580\""
Apr 16 23:47:21.749320 containerd[1563]: time="2026-04-16T23:47:21.749294716Z" level=info msg="connecting to shim 9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580" address="unix:///run/containerd/s/ff3eb44ba33dfda94d4f83a698e0729ff295fe6fdac454246bc580ee93fc3fe5" protocol=ttrpc version=3
Apr 16 23:47:21.782308 systemd[1]: Started cri-containerd-9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580.scope - libcontainer container 9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580.
Apr 16 23:47:21.868867 containerd[1563]: time="2026-04-16T23:47:21.868811682Z" level=info msg="StartContainer for \"9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580\" returns successfully"
Apr 16 23:47:21.927878 containerd[1563]: time="2026-04-16T23:47:21.927594617Z" level=info msg="received container exit event container_id:\"9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580\" id:\"9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580\" pid:3482 exited_at:{seconds:1776383241 nanos:926951131}"
Apr 16 23:47:21.927781 systemd[1]: cri-containerd-9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580.scope: Deactivated successfully.
Apr 16 23:47:22.529140 kubelet[2803]: E0416 23:47:22.528164 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c"
Apr 16 23:47:22.655967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e52adfb7302b46c54168b2ebad7765113bdbfb5b9cef6e4c9668ce661156580-rootfs.mount: Deactivated successfully.
Apr 16 23:47:22.711848 kubelet[2803]: I0416 23:47:22.711752 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7ddfc6cbdf-2xm84" podStartSLOduration=8.939211565 podStartE2EDuration="12.711729308s" podCreationTimestamp="2026-04-16 23:47:10 +0000 UTC" firstStartedPulling="2026-04-16 23:47:11.022216704 +0000 UTC m=+18.733400717" lastFinishedPulling="2026-04-16 23:47:14.794734439 +0000 UTC m=+22.505918460" observedRunningTime="2026-04-16 23:47:15.710439871 +0000 UTC m=+23.421623893" watchObservedRunningTime="2026-04-16 23:47:22.711729308 +0000 UTC m=+30.422913328"
Apr 16 23:47:24.287025 kubelet[2803]: I0416 23:47:24.286815 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:47:24.529371 kubelet[2803]: E0416 23:47:24.527902 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c"
Apr 16 23:47:24.701211 containerd[1563]: time="2026-04-16T23:47:24.700834638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 16 23:47:26.528563 kubelet[2803]: E0416 23:47:26.528490 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c"
Apr 16 23:47:27.846017 containerd[1563]: time="2026-04-16T23:47:27.845954982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:27.847472 containerd[1563]: time="2026-04-16T23:47:27.847412781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 16 23:47:27.848765 containerd[1563]: time="2026-04-16T23:47:27.848698530Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:27.851757 containerd[1563]: time="2026-04-16T23:47:27.851696421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:27.852922 containerd[1563]: time="2026-04-16T23:47:27.852735039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.150499146s"
Apr 16 23:47:27.852922 containerd[1563]: time="2026-04-16T23:47:27.852777484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 16 23:47:27.858324 containerd[1563]: time="2026-04-16T23:47:27.858251582Z" level=info msg="CreateContainer within sandbox \"baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 16 23:47:27.869121 containerd[1563]: time="2026-04-16T23:47:27.867912868Z" level=info msg="Container 2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:47:27.883564 containerd[1563]: time="2026-04-16T23:47:27.883505069Z" level=info msg="CreateContainer within sandbox \"baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30\""
Apr 16 23:47:27.884287 containerd[1563]: time="2026-04-16T23:47:27.884250873Z" level=info msg="StartContainer for \"2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30\""
Apr 16 23:47:27.886696 containerd[1563]: time="2026-04-16T23:47:27.886657281Z" level=info msg="connecting to shim 2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30" address="unix:///run/containerd/s/ff3eb44ba33dfda94d4f83a698e0729ff295fe6fdac454246bc580ee93fc3fe5" protocol=ttrpc version=3
Apr 16 23:47:27.922321 systemd[1]: Started cri-containerd-2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30.scope - libcontainer container 2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30.
Apr 16 23:47:28.007990 containerd[1563]: time="2026-04-16T23:47:28.007852103Z" level=info msg="StartContainer for \"2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30\" returns successfully" Apr 16 23:47:28.529036 kubelet[2803]: E0416 23:47:28.528481 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22dv" podUID="45c1d843-c2b2-409a-bd42-84f1b45c185c" Apr 16 23:47:29.010048 containerd[1563]: time="2026-04-16T23:47:29.009981534Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:47:29.012752 systemd[1]: cri-containerd-2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30.scope: Deactivated successfully. Apr 16 23:47:29.013201 systemd[1]: cri-containerd-2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30.scope: Consumed 640ms CPU time, 198M memory peak, 177M written to disk. Apr 16 23:47:29.015915 containerd[1563]: time="2026-04-16T23:47:29.015773305Z" level=info msg="received container exit event container_id:\"2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30\" id:\"2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30\" pid:3545 exited_at:{seconds:1776383249 nanos:15491250}" Apr 16 23:47:29.051441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fe6633956c95b6a1e9c3bfab4b17008373f2a8314b31ba39002fa92fea33d30-rootfs.mount: Deactivated successfully. 
Apr 16 23:47:29.091233 kubelet[2803]: I0416 23:47:29.091195 2803 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 16 23:47:29.226529 systemd[1]: Created slice kubepods-besteffort-pod1a6fa143_264c_498d_9088_0137213f34c3.slice - libcontainer container kubepods-besteffort-pod1a6fa143_264c_498d_9088_0137213f34c3.slice. Apr 16 23:47:29.305486 kubelet[2803]: I0416 23:47:29.262032 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a6fa143-264c-498d-9088-0137213f34c3-whisker-ca-bundle\") pod \"whisker-6678d7d5d-qxzs2\" (UID: \"1a6fa143-264c-498d-9088-0137213f34c3\") " pod="calico-system/whisker-6678d7d5d-qxzs2" Apr 16 23:47:29.305486 kubelet[2803]: I0416 23:47:29.262121 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a6fa143-264c-498d-9088-0137213f34c3-whisker-backend-key-pair\") pod \"whisker-6678d7d5d-qxzs2\" (UID: \"1a6fa143-264c-498d-9088-0137213f34c3\") " pod="calico-system/whisker-6678d7d5d-qxzs2" Apr 16 23:47:29.305486 kubelet[2803]: I0416 23:47:29.262152 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshwx\" (UniqueName: \"kubernetes.io/projected/1a6fa143-264c-498d-9088-0137213f34c3-kube-api-access-qshwx\") pod \"whisker-6678d7d5d-qxzs2\" (UID: \"1a6fa143-264c-498d-9088-0137213f34c3\") " pod="calico-system/whisker-6678d7d5d-qxzs2" Apr 16 23:47:29.305486 kubelet[2803]: I0416 23:47:29.262177 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1a6fa143-264c-498d-9088-0137213f34c3-nginx-config\") pod \"whisker-6678d7d5d-qxzs2\" (UID: \"1a6fa143-264c-498d-9088-0137213f34c3\") " pod="calico-system/whisker-6678d7d5d-qxzs2" Apr 16 
23:47:29.347040 systemd[1]: Created slice kubepods-burstable-podae053e49_d4f7_4e65_86ff_52aeb12ce78a.slice - libcontainer container kubepods-burstable-podae053e49_d4f7_4e65_86ff_52aeb12ce78a.slice. Apr 16 23:47:29.463274 kubelet[2803]: I0416 23:47:29.463201 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae053e49-d4f7-4e65-86ff-52aeb12ce78a-config-volume\") pod \"coredns-66bc5c9577-s2djx\" (UID: \"ae053e49-d4f7-4e65-86ff-52aeb12ce78a\") " pod="kube-system/coredns-66bc5c9577-s2djx" Apr 16 23:47:29.463274 kubelet[2803]: I0416 23:47:29.463281 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6vxk\" (UniqueName: \"kubernetes.io/projected/ae053e49-d4f7-4e65-86ff-52aeb12ce78a-kube-api-access-m6vxk\") pod \"coredns-66bc5c9577-s2djx\" (UID: \"ae053e49-d4f7-4e65-86ff-52aeb12ce78a\") " pod="kube-system/coredns-66bc5c9577-s2djx" Apr 16 23:47:29.616235 containerd[1563]: time="2026-04-16T23:47:29.616044868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6678d7d5d-qxzs2,Uid:1a6fa143-264c-498d-9088-0137213f34c3,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:29.650153 systemd[1]: Created slice kubepods-besteffort-pod8945b8a7_e39d_459e_b827_9b525646bee6.slice - libcontainer container kubepods-besteffort-pod8945b8a7_e39d_459e_b827_9b525646bee6.slice. Apr 16 23:47:29.663125 systemd[1]: Created slice kubepods-burstable-pod2700e30b_1b7a_4e9b_9459_dc293eca7042.slice - libcontainer container kubepods-burstable-pod2700e30b_1b7a_4e9b_9459_dc293eca7042.slice. 
Apr 16 23:47:29.663578 containerd[1563]: time="2026-04-16T23:47:29.663233733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s2djx,Uid:ae053e49-d4f7-4e65-86ff-52aeb12ce78a,Namespace:kube-system,Attempt:0,}" Apr 16 23:47:29.667767 kubelet[2803]: I0416 23:47:29.666594 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvh8x\" (UniqueName: \"kubernetes.io/projected/8945b8a7-e39d-459e-b827-9b525646bee6-kube-api-access-nvh8x\") pod \"calico-kube-controllers-8654597d95-sdzr5\" (UID: \"8945b8a7-e39d-459e-b827-9b525646bee6\") " pod="calico-system/calico-kube-controllers-8654597d95-sdzr5" Apr 16 23:47:29.667767 kubelet[2803]: I0416 23:47:29.666673 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8945b8a7-e39d-459e-b827-9b525646bee6-tigera-ca-bundle\") pod \"calico-kube-controllers-8654597d95-sdzr5\" (UID: \"8945b8a7-e39d-459e-b827-9b525646bee6\") " pod="calico-system/calico-kube-controllers-8654597d95-sdzr5" Apr 16 23:47:29.687075 systemd[1]: Created slice kubepods-besteffort-pod18374daf_2055_4b0b_8c9d_1ca849a297fc.slice - libcontainer container kubepods-besteffort-pod18374daf_2055_4b0b_8c9d_1ca849a297fc.slice. Apr 16 23:47:29.705621 systemd[1]: Created slice kubepods-besteffort-pod1ed38fd9_69a8_4273_aed3_ce32f3fe1e49.slice - libcontainer container kubepods-besteffort-pod1ed38fd9_69a8_4273_aed3_ce32f3fe1e49.slice. Apr 16 23:47:29.726252 systemd[1]: Created slice kubepods-besteffort-pode4023aaa_91d8_4d6e_be91_82fbe65c18a7.slice - libcontainer container kubepods-besteffort-pode4023aaa_91d8_4d6e_be91_82fbe65c18a7.slice. 
Apr 16 23:47:29.768559 kubelet[2803]: I0416 23:47:29.767026 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4023aaa-91d8-4d6e-be91-82fbe65c18a7-config\") pod \"goldmane-cccfbd5cf-7d5sl\" (UID: \"e4023aaa-91d8-4d6e-be91-82fbe65c18a7\") " pod="calico-system/goldmane-cccfbd5cf-7d5sl" Apr 16 23:47:29.768559 kubelet[2803]: I0416 23:47:29.767074 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2700e30b-1b7a-4e9b-9459-dc293eca7042-config-volume\") pod \"coredns-66bc5c9577-gqpb9\" (UID: \"2700e30b-1b7a-4e9b-9459-dc293eca7042\") " pod="kube-system/coredns-66bc5c9577-gqpb9" Apr 16 23:47:29.768559 kubelet[2803]: I0416 23:47:29.767117 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn58d\" (UniqueName: \"kubernetes.io/projected/2700e30b-1b7a-4e9b-9459-dc293eca7042-kube-api-access-hn58d\") pod \"coredns-66bc5c9577-gqpb9\" (UID: \"2700e30b-1b7a-4e9b-9459-dc293eca7042\") " pod="kube-system/coredns-66bc5c9577-gqpb9" Apr 16 23:47:29.768559 kubelet[2803]: I0416 23:47:29.767147 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfbs\" (UniqueName: \"kubernetes.io/projected/18374daf-2055-4b0b-8c9d-1ca849a297fc-kube-api-access-sqfbs\") pod \"calico-apiserver-c69794945-s56n5\" (UID: \"18374daf-2055-4b0b-8c9d-1ca849a297fc\") " pod="calico-system/calico-apiserver-c69794945-s56n5" Apr 16 23:47:29.768559 kubelet[2803]: I0416 23:47:29.767175 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e4023aaa-91d8-4d6e-be91-82fbe65c18a7-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-7d5sl\" (UID: \"e4023aaa-91d8-4d6e-be91-82fbe65c18a7\") " 
pod="calico-system/goldmane-cccfbd5cf-7d5sl" Apr 16 23:47:29.768968 kubelet[2803]: I0416 23:47:29.767201 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ftz\" (UniqueName: \"kubernetes.io/projected/e4023aaa-91d8-4d6e-be91-82fbe65c18a7-kube-api-access-v2ftz\") pod \"goldmane-cccfbd5cf-7d5sl\" (UID: \"e4023aaa-91d8-4d6e-be91-82fbe65c18a7\") " pod="calico-system/goldmane-cccfbd5cf-7d5sl" Apr 16 23:47:29.768968 kubelet[2803]: I0416 23:47:29.767252 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t4t2\" (UniqueName: \"kubernetes.io/projected/1ed38fd9-69a8-4273-aed3-ce32f3fe1e49-kube-api-access-7t4t2\") pod \"calico-apiserver-c69794945-pkspw\" (UID: \"1ed38fd9-69a8-4273-aed3-ce32f3fe1e49\") " pod="calico-system/calico-apiserver-c69794945-pkspw" Apr 16 23:47:29.768968 kubelet[2803]: I0416 23:47:29.767278 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4023aaa-91d8-4d6e-be91-82fbe65c18a7-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-7d5sl\" (UID: \"e4023aaa-91d8-4d6e-be91-82fbe65c18a7\") " pod="calico-system/goldmane-cccfbd5cf-7d5sl" Apr 16 23:47:29.768968 kubelet[2803]: I0416 23:47:29.767344 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/18374daf-2055-4b0b-8c9d-1ca849a297fc-calico-apiserver-certs\") pod \"calico-apiserver-c69794945-s56n5\" (UID: \"18374daf-2055-4b0b-8c9d-1ca849a297fc\") " pod="calico-system/calico-apiserver-c69794945-s56n5" Apr 16 23:47:29.768968 kubelet[2803]: I0416 23:47:29.767399 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/1ed38fd9-69a8-4273-aed3-ce32f3fe1e49-calico-apiserver-certs\") pod \"calico-apiserver-c69794945-pkspw\" (UID: \"1ed38fd9-69a8-4273-aed3-ce32f3fe1e49\") " pod="calico-system/calico-apiserver-c69794945-pkspw" Apr 16 23:47:29.790230 containerd[1563]: time="2026-04-16T23:47:29.790181488Z" level=info msg="CreateContainer within sandbox \"baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 23:47:29.846475 containerd[1563]: time="2026-04-16T23:47:29.846414744Z" level=info msg="Container ce2cd579035b01041e7fa1f3fcaf6e26bcc5e4473c52f1291e8de9d4809060a6: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:47:29.891150 containerd[1563]: time="2026-04-16T23:47:29.890715640Z" level=info msg="CreateContainer within sandbox \"baa59741aeceab10206bb56f9eb6c6308f5607c717428b2819016785be7e80c5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ce2cd579035b01041e7fa1f3fcaf6e26bcc5e4473c52f1291e8de9d4809060a6\"" Apr 16 23:47:29.905337 containerd[1563]: time="2026-04-16T23:47:29.905181205Z" level=info msg="StartContainer for \"ce2cd579035b01041e7fa1f3fcaf6e26bcc5e4473c52f1291e8de9d4809060a6\"" Apr 16 23:47:29.919887 containerd[1563]: time="2026-04-16T23:47:29.919839742Z" level=info msg="connecting to shim ce2cd579035b01041e7fa1f3fcaf6e26bcc5e4473c52f1291e8de9d4809060a6" address="unix:///run/containerd/s/ff3eb44ba33dfda94d4f83a698e0729ff295fe6fdac454246bc580ee93fc3fe5" protocol=ttrpc version=3 Apr 16 23:47:29.938364 containerd[1563]: time="2026-04-16T23:47:29.938258842Z" level=error msg="Failed to destroy network for sandbox \"e67ce2b428a9c53865ee165f4a01760be75bbf63cb747bb42433cc71b47308a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:29.941815 containerd[1563]: time="2026-04-16T23:47:29.941720802Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6678d7d5d-qxzs2,Uid:1a6fa143-264c-498d-9088-0137213f34c3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e67ce2b428a9c53865ee165f4a01760be75bbf63cb747bb42433cc71b47308a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:29.942646 kubelet[2803]: E0416 23:47:29.942068 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e67ce2b428a9c53865ee165f4a01760be75bbf63cb747bb42433cc71b47308a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:29.942646 kubelet[2803]: E0416 23:47:29.942592 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e67ce2b428a9c53865ee165f4a01760be75bbf63cb747bb42433cc71b47308a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6678d7d5d-qxzs2" Apr 16 23:47:29.943491 kubelet[2803]: E0416 23:47:29.942858 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e67ce2b428a9c53865ee165f4a01760be75bbf63cb747bb42433cc71b47308a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6678d7d5d-qxzs2" Apr 16 23:47:29.944521 kubelet[2803]: E0416 23:47:29.944170 2803 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6678d7d5d-qxzs2_calico-system(1a6fa143-264c-498d-9088-0137213f34c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6678d7d5d-qxzs2_calico-system(1a6fa143-264c-498d-9088-0137213f34c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e67ce2b428a9c53865ee165f4a01760be75bbf63cb747bb42433cc71b47308a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6678d7d5d-qxzs2" podUID="1a6fa143-264c-498d-9088-0137213f34c3" Apr 16 23:47:29.949446 containerd[1563]: time="2026-04-16T23:47:29.949396221Z" level=error msg="Failed to destroy network for sandbox \"fe997b9bc5bb49cc1b158b90b0eed0980c784afedf4e8ad1b7d1ac7a0acf2052\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:29.951559 containerd[1563]: time="2026-04-16T23:47:29.951505213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s2djx,Uid:ae053e49-d4f7-4e65-86ff-52aeb12ce78a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe997b9bc5bb49cc1b158b90b0eed0980c784afedf4e8ad1b7d1ac7a0acf2052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:29.953081 kubelet[2803]: E0416 23:47:29.952779 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe997b9bc5bb49cc1b158b90b0eed0980c784afedf4e8ad1b7d1ac7a0acf2052\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:29.953227 kubelet[2803]: E0416 23:47:29.953126 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe997b9bc5bb49cc1b158b90b0eed0980c784afedf4e8ad1b7d1ac7a0acf2052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s2djx" Apr 16 23:47:29.953227 kubelet[2803]: E0416 23:47:29.953160 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe997b9bc5bb49cc1b158b90b0eed0980c784afedf4e8ad1b7d1ac7a0acf2052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s2djx" Apr 16 23:47:29.953338 kubelet[2803]: E0416 23:47:29.953229 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-s2djx_kube-system(ae053e49-d4f7-4e65-86ff-52aeb12ce78a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-s2djx_kube-system(ae053e49-d4f7-4e65-86ff-52aeb12ce78a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe997b9bc5bb49cc1b158b90b0eed0980c784afedf4e8ad1b7d1ac7a0acf2052\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-s2djx" podUID="ae053e49-d4f7-4e65-86ff-52aeb12ce78a" Apr 16 23:47:29.956311 systemd[1]: Started 
cri-containerd-ce2cd579035b01041e7fa1f3fcaf6e26bcc5e4473c52f1291e8de9d4809060a6.scope - libcontainer container ce2cd579035b01041e7fa1f3fcaf6e26bcc5e4473c52f1291e8de9d4809060a6. Apr 16 23:47:29.964894 containerd[1563]: time="2026-04-16T23:47:29.964835163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8654597d95-sdzr5,Uid:8945b8a7-e39d-459e-b827-9b525646bee6,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:29.979442 containerd[1563]: time="2026-04-16T23:47:29.979396918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gqpb9,Uid:2700e30b-1b7a-4e9b-9459-dc293eca7042,Namespace:kube-system,Attempt:0,}" Apr 16 23:47:30.002237 containerd[1563]: time="2026-04-16T23:47:30.001225353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c69794945-s56n5,Uid:18374daf-2055-4b0b-8c9d-1ca849a297fc,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:30.017948 containerd[1563]: time="2026-04-16T23:47:30.017810431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c69794945-pkspw,Uid:1ed38fd9-69a8-4273-aed3-ce32f3fe1e49,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:30.046662 containerd[1563]: time="2026-04-16T23:47:30.046505855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7d5sl,Uid:e4023aaa-91d8-4d6e-be91-82fbe65c18a7,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:30.119863 systemd[1]: run-netns-cni\x2d11d7cb06\x2d904e\x2dadc4\x2dfa4b\x2d2487e0ea0e00.mount: Deactivated successfully. 
Apr 16 23:47:30.261326 containerd[1563]: time="2026-04-16T23:47:30.261277070Z" level=info msg="StartContainer for \"ce2cd579035b01041e7fa1f3fcaf6e26bcc5e4473c52f1291e8de9d4809060a6\" returns successfully" Apr 16 23:47:30.303687 containerd[1563]: time="2026-04-16T23:47:30.300329509Z" level=error msg="Failed to destroy network for sandbox \"575faaedbeeacd75df081ea8812c3fde66f3abd56af4deaea4d37be096178778\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.309086 systemd[1]: run-netns-cni\x2d446d11d5\x2dd39b\x2d3708\x2dba10\x2d3ad8801ac932.mount: Deactivated successfully. Apr 16 23:47:30.313557 containerd[1563]: time="2026-04-16T23:47:30.313388878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gqpb9,Uid:2700e30b-1b7a-4e9b-9459-dc293eca7042,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"575faaedbeeacd75df081ea8812c3fde66f3abd56af4deaea4d37be096178778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.314377 kubelet[2803]: E0416 23:47:30.314266 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575faaedbeeacd75df081ea8812c3fde66f3abd56af4deaea4d37be096178778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.314598 kubelet[2803]: E0416 23:47:30.314559 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"575faaedbeeacd75df081ea8812c3fde66f3abd56af4deaea4d37be096178778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gqpb9" Apr 16 23:47:30.314678 kubelet[2803]: E0416 23:47:30.314611 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575faaedbeeacd75df081ea8812c3fde66f3abd56af4deaea4d37be096178778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gqpb9" Apr 16 23:47:30.317020 kubelet[2803]: E0416 23:47:30.315396 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gqpb9_kube-system(2700e30b-1b7a-4e9b-9459-dc293eca7042)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gqpb9_kube-system(2700e30b-1b7a-4e9b-9459-dc293eca7042)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"575faaedbeeacd75df081ea8812c3fde66f3abd56af4deaea4d37be096178778\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gqpb9" podUID="2700e30b-1b7a-4e9b-9459-dc293eca7042" Apr 16 23:47:30.342863 containerd[1563]: time="2026-04-16T23:47:30.342018649Z" level=error msg="Failed to destroy network for sandbox \"dd7ebb9befe69fffddc55ba724d2b4cb2baed03500a3422d54a9c037590bfb2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.349247 
systemd[1]: run-netns-cni\x2d7326e99a\x2d560d\x2d161b\x2df447\x2d894076db9a2e.mount: Deactivated successfully. Apr 16 23:47:30.353073 containerd[1563]: time="2026-04-16T23:47:30.352987854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c69794945-pkspw,Uid:1ed38fd9-69a8-4273-aed3-ce32f3fe1e49,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7ebb9befe69fffddc55ba724d2b4cb2baed03500a3422d54a9c037590bfb2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.353756 kubelet[2803]: E0416 23:47:30.353578 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7ebb9befe69fffddc55ba724d2b4cb2baed03500a3422d54a9c037590bfb2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.353756 kubelet[2803]: E0416 23:47:30.353653 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7ebb9befe69fffddc55ba724d2b4cb2baed03500a3422d54a9c037590bfb2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-c69794945-pkspw" Apr 16 23:47:30.353756 kubelet[2803]: E0416 23:47:30.353686 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7ebb9befe69fffddc55ba724d2b4cb2baed03500a3422d54a9c037590bfb2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-c69794945-pkspw" Apr 16 23:47:30.354391 kubelet[2803]: E0416 23:47:30.354051 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c69794945-pkspw_calico-system(1ed38fd9-69a8-4273-aed3-ce32f3fe1e49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c69794945-pkspw_calico-system(1ed38fd9-69a8-4273-aed3-ce32f3fe1e49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd7ebb9befe69fffddc55ba724d2b4cb2baed03500a3422d54a9c037590bfb2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-c69794945-pkspw" podUID="1ed38fd9-69a8-4273-aed3-ce32f3fe1e49" Apr 16 23:47:30.376458 containerd[1563]: time="2026-04-16T23:47:30.376397419Z" level=error msg="Failed to destroy network for sandbox \"7b47949481307b7445bc6313f3e5fb091a068eddcea66756e925355d3f04f109\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.381132 containerd[1563]: time="2026-04-16T23:47:30.380072637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7d5sl,Uid:e4023aaa-91d8-4d6e-be91-82fbe65c18a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b47949481307b7445bc6313f3e5fb091a068eddcea66756e925355d3f04f109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.384541 kubelet[2803]: E0416 
23:47:30.381672 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b47949481307b7445bc6313f3e5fb091a068eddcea66756e925355d3f04f109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.384541 kubelet[2803]: E0416 23:47:30.382398 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b47949481307b7445bc6313f3e5fb091a068eddcea66756e925355d3f04f109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-7d5sl" Apr 16 23:47:30.384541 kubelet[2803]: E0416 23:47:30.382447 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b47949481307b7445bc6313f3e5fb091a068eddcea66756e925355d3f04f109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-7d5sl" Apr 16 23:47:30.381974 systemd[1]: run-netns-cni\x2d56ef9329\x2d01cb\x2dfcc1\x2d504a\x2dcc2e237342a1.mount: Deactivated successfully. 
Apr 16 23:47:30.384890 kubelet[2803]: E0416 23:47:30.382526 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-7d5sl_calico-system(e4023aaa-91d8-4d6e-be91-82fbe65c18a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-7d5sl_calico-system(e4023aaa-91d8-4d6e-be91-82fbe65c18a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b47949481307b7445bc6313f3e5fb091a068eddcea66756e925355d3f04f109\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-7d5sl" podUID="e4023aaa-91d8-4d6e-be91-82fbe65c18a7" Apr 16 23:47:30.389461 containerd[1563]: time="2026-04-16T23:47:30.389369905Z" level=error msg="Failed to destroy network for sandbox \"47550dc43898476763483acee2de9d7a4b17228bfeee8a0576dc79dff0ad419b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.392055 containerd[1563]: time="2026-04-16T23:47:30.392004360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8654597d95-sdzr5,Uid:8945b8a7-e39d-459e-b827-9b525646bee6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47550dc43898476763483acee2de9d7a4b17228bfeee8a0576dc79dff0ad419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.392314 kubelet[2803]: E0416 23:47:30.392266 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"47550dc43898476763483acee2de9d7a4b17228bfeee8a0576dc79dff0ad419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.392442 kubelet[2803]: E0416 23:47:30.392327 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47550dc43898476763483acee2de9d7a4b17228bfeee8a0576dc79dff0ad419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8654597d95-sdzr5" Apr 16 23:47:30.392442 kubelet[2803]: E0416 23:47:30.392355 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47550dc43898476763483acee2de9d7a4b17228bfeee8a0576dc79dff0ad419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8654597d95-sdzr5" Apr 16 23:47:30.392442 kubelet[2803]: E0416 23:47:30.392434 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8654597d95-sdzr5_calico-system(8945b8a7-e39d-459e-b827-9b525646bee6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8654597d95-sdzr5_calico-system(8945b8a7-e39d-459e-b827-9b525646bee6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47550dc43898476763483acee2de9d7a4b17228bfeee8a0576dc79dff0ad419b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8654597d95-sdzr5" podUID="8945b8a7-e39d-459e-b827-9b525646bee6" Apr 16 23:47:30.418492 containerd[1563]: time="2026-04-16T23:47:30.418350831Z" level=error msg="Failed to destroy network for sandbox \"09f648d3b8cc9fedbb86b6c139d7771fdb3642c786360dd7d5232f49d2482a2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.420235 containerd[1563]: time="2026-04-16T23:47:30.420159678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c69794945-s56n5,Uid:18374daf-2055-4b0b-8c9d-1ca849a297fc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f648d3b8cc9fedbb86b6c139d7771fdb3642c786360dd7d5232f49d2482a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.420852 kubelet[2803]: E0416 23:47:30.420736 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f648d3b8cc9fedbb86b6c139d7771fdb3642c786360dd7d5232f49d2482a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:47:30.420992 kubelet[2803]: E0416 23:47:30.420868 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f648d3b8cc9fedbb86b6c139d7771fdb3642c786360dd7d5232f49d2482a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-c69794945-s56n5" Apr 16 23:47:30.420992 kubelet[2803]: E0416 23:47:30.420900 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f648d3b8cc9fedbb86b6c139d7771fdb3642c786360dd7d5232f49d2482a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-c69794945-s56n5" Apr 16 23:47:30.422260 kubelet[2803]: E0416 23:47:30.421037 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c69794945-s56n5_calico-system(18374daf-2055-4b0b-8c9d-1ca849a297fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c69794945-s56n5_calico-system(18374daf-2055-4b0b-8c9d-1ca849a297fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09f648d3b8cc9fedbb86b6c139d7771fdb3642c786360dd7d5232f49d2482a2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-c69794945-s56n5" podUID="18374daf-2055-4b0b-8c9d-1ca849a297fc" Apr 16 23:47:30.547708 systemd[1]: Created slice kubepods-besteffort-pod45c1d843_c2b2_409a_bd42_84f1b45c185c.slice - libcontainer container kubepods-besteffort-pod45c1d843_c2b2_409a_bd42_84f1b45c185c.slice. 
Apr 16 23:47:30.558149 containerd[1563]: time="2026-04-16T23:47:30.557752789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22dv,Uid:45c1d843-c2b2-409a-bd42-84f1b45c185c,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:30.794948 kubelet[2803]: I0416 23:47:30.794867 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sdxzd" podStartSLOduration=3.901659491 podStartE2EDuration="20.794843089s" podCreationTimestamp="2026-04-16 23:47:10 +0000 UTC" firstStartedPulling="2026-04-16 23:47:10.960813041 +0000 UTC m=+18.671997062" lastFinishedPulling="2026-04-16 23:47:27.853996654 +0000 UTC m=+35.565180660" observedRunningTime="2026-04-16 23:47:30.792227844 +0000 UTC m=+38.503411855" watchObservedRunningTime="2026-04-16 23:47:30.794843089 +0000 UTC m=+38.506027112" Apr 16 23:47:30.860898 systemd-networkd[1427]: cali18e0614d93c: Link UP Apr 16 23:47:30.861603 systemd-networkd[1427]: cali18e0614d93c: Gained carrier Apr 16 23:47:30.881579 kubelet[2803]: I0416 23:47:30.880041 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a6fa143-264c-498d-9088-0137213f34c3-whisker-backend-key-pair\") pod \"1a6fa143-264c-498d-9088-0137213f34c3\" (UID: \"1a6fa143-264c-498d-9088-0137213f34c3\") " Apr 16 23:47:30.881579 kubelet[2803]: I0416 23:47:30.880468 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1a6fa143-264c-498d-9088-0137213f34c3-nginx-config\") pod \"1a6fa143-264c-498d-9088-0137213f34c3\" (UID: \"1a6fa143-264c-498d-9088-0137213f34c3\") " Apr 16 23:47:30.881579 kubelet[2803]: I0416 23:47:30.880504 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a6fa143-264c-498d-9088-0137213f34c3-whisker-ca-bundle\") pod 
\"1a6fa143-264c-498d-9088-0137213f34c3\" (UID: \"1a6fa143-264c-498d-9088-0137213f34c3\") " Apr 16 23:47:30.881579 kubelet[2803]: I0416 23:47:30.880553 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qshwx\" (UniqueName: \"kubernetes.io/projected/1a6fa143-264c-498d-9088-0137213f34c3-kube-api-access-qshwx\") pod \"1a6fa143-264c-498d-9088-0137213f34c3\" (UID: \"1a6fa143-264c-498d-9088-0137213f34c3\") " Apr 16 23:47:30.888123 kubelet[2803]: I0416 23:47:30.887367 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6fa143-264c-498d-9088-0137213f34c3-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "1a6fa143-264c-498d-9088-0137213f34c3" (UID: "1a6fa143-264c-498d-9088-0137213f34c3"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:47:30.888123 kubelet[2803]: I0416 23:47:30.887914 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6fa143-264c-498d-9088-0137213f34c3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1a6fa143-264c-498d-9088-0137213f34c3" (UID: "1a6fa143-264c-498d-9088-0137213f34c3"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:47:30.892334 containerd[1563]: 2026-04-16 23:47:30.621 [ERROR][3805] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:47:30.892334 containerd[1563]: 2026-04-16 23:47:30.644 [INFO][3805] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0 csi-node-driver- calico-system 45c1d843-c2b2-409a-bd42-84f1b45c185c 682 0 2026-04-16 23:47:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a csi-node-driver-z22dv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali18e0614d93c [] [] }} ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Namespace="calico-system" Pod="csi-node-driver-z22dv" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-" Apr 16 23:47:30.892334 containerd[1563]: 2026-04-16 23:47:30.646 [INFO][3805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Namespace="calico-system" Pod="csi-node-driver-z22dv" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" Apr 16 23:47:30.892334 containerd[1563]: 2026-04-16 23:47:30.699 [INFO][3823] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" HandleID="k8s-pod-network.52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" Apr 16 23:47:30.894512 containerd[1563]: 2026-04-16 23:47:30.710 [INFO][3823] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" HandleID="k8s-pod-network.52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", "pod":"csi-node-driver-z22dv", "timestamp":"2026-04-16 23:47:30.69967174 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002151e0)} Apr 16 23:47:30.894512 containerd[1563]: 2026-04-16 23:47:30.710 [INFO][3823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:47:30.894512 containerd[1563]: 2026-04-16 23:47:30.710 [INFO][3823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:47:30.894512 containerd[1563]: 2026-04-16 23:47:30.710 [INFO][3823] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a' Apr 16 23:47:30.894512 containerd[1563]: 2026-04-16 23:47:30.792 [INFO][3823] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:30.894512 containerd[1563]: 2026-04-16 23:47:30.804 [INFO][3823] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:30.894512 containerd[1563]: 2026-04-16 23:47:30.815 [INFO][3823] ipam/ipam.go 526: Trying affinity for 192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:30.894512 containerd[1563]: 2026-04-16 23:47:30.818 [INFO][3823] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:30.894962 kubelet[2803]: I0416 23:47:30.893998 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6fa143-264c-498d-9088-0137213f34c3-kube-api-access-qshwx" (OuterVolumeSpecName: "kube-api-access-qshwx") pod "1a6fa143-264c-498d-9088-0137213f34c3" (UID: "1a6fa143-264c-498d-9088-0137213f34c3"). InnerVolumeSpecName "kube-api-access-qshwx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:47:30.895032 containerd[1563]: 2026-04-16 23:47:30.824 [INFO][3823] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:30.895032 containerd[1563]: 2026-04-16 23:47:30.824 [INFO][3823] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:30.895032 containerd[1563]: 2026-04-16 23:47:30.827 [INFO][3823] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8 Apr 16 23:47:30.895032 containerd[1563]: 2026-04-16 23:47:30.833 [INFO][3823] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.0/26 handle="k8s-pod-network.52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:30.895032 containerd[1563]: 2026-04-16 23:47:30.841 [INFO][3823] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.1/26] block=192.168.65.0/26 handle="k8s-pod-network.52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:30.895032 containerd[1563]: 2026-04-16 23:47:30.841 [INFO][3823] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.1/26] handle="k8s-pod-network.52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:30.895032 containerd[1563]: 2026-04-16 23:47:30.842 [INFO][3823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:47:30.895032 containerd[1563]: 2026-04-16 23:47:30.842 [INFO][3823] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.1/26] IPv6=[] ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" HandleID="k8s-pod-network.52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" Apr 16 23:47:30.895436 containerd[1563]: 2026-04-16 23:47:30.848 [INFO][3805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Namespace="calico-system" Pod="csi-node-driver-z22dv" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45c1d843-c2b2-409a-bd42-84f1b45c185c", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"", 
Pod:"csi-node-driver-z22dv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18e0614d93c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:30.895567 containerd[1563]: 2026-04-16 23:47:30.848 [INFO][3805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.1/32] ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Namespace="calico-system" Pod="csi-node-driver-z22dv" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" Apr 16 23:47:30.895567 containerd[1563]: 2026-04-16 23:47:30.848 [INFO][3805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18e0614d93c ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Namespace="calico-system" Pod="csi-node-driver-z22dv" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" Apr 16 23:47:30.895567 containerd[1563]: 2026-04-16 23:47:30.861 [INFO][3805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Namespace="calico-system" Pod="csi-node-driver-z22dv" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" Apr 16 23:47:30.897590 containerd[1563]: 2026-04-16 23:47:30.862 [INFO][3805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" Namespace="calico-system" Pod="csi-node-driver-z22dv" 
WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45c1d843-c2b2-409a-bd42-84f1b45c185c", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8", Pod:"csi-node-driver-z22dv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18e0614d93c", MAC:"ca:3d:bd:95:88:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:30.897823 containerd[1563]: 2026-04-16 23:47:30.880 [INFO][3805] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" 
Namespace="calico-system" Pod="csi-node-driver-z22dv" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-csi--node--driver--z22dv-eth0" Apr 16 23:47:30.906181 kubelet[2803]: I0416 23:47:30.906071 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a6fa143-264c-498d-9088-0137213f34c3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1a6fa143-264c-498d-9088-0137213f34c3" (UID: "1a6fa143-264c-498d-9088-0137213f34c3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 23:47:30.923782 containerd[1563]: time="2026-04-16T23:47:30.923648778Z" level=info msg="connecting to shim 52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8" address="unix:///run/containerd/s/f0a0f1bbaba51b96be4028e1318f07089703ee9634ece49f906137a605328e8d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:47:30.955316 systemd[1]: Started cri-containerd-52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8.scope - libcontainer container 52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8. 
Apr 16 23:47:30.981531 kubelet[2803]: I0416 23:47:30.981476 2803 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1a6fa143-264c-498d-9088-0137213f34c3-nginx-config\") on node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" DevicePath \"\"" Apr 16 23:47:30.982184 kubelet[2803]: I0416 23:47:30.982158 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a6fa143-264c-498d-9088-0137213f34c3-whisker-ca-bundle\") on node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" DevicePath \"\"" Apr 16 23:47:30.982736 kubelet[2803]: I0416 23:47:30.982377 2803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qshwx\" (UniqueName: \"kubernetes.io/projected/1a6fa143-264c-498d-9088-0137213f34c3-kube-api-access-qshwx\") on node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" DevicePath \"\"" Apr 16 23:47:30.983029 kubelet[2803]: I0416 23:47:30.982403 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a6fa143-264c-498d-9088-0137213f34c3-whisker-backend-key-pair\") on node \"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a\" DevicePath \"\"" Apr 16 23:47:30.992256 containerd[1563]: time="2026-04-16T23:47:30.991997926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22dv,Uid:45c1d843-c2b2-409a-bd42-84f1b45c185c,Namespace:calico-system,Attempt:0,} returns sandbox id \"52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8\"" Apr 16 23:47:30.995524 containerd[1563]: time="2026-04-16T23:47:30.995451709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 23:47:31.055775 systemd[1]: run-netns-cni\x2db8bb6dd2\x2da57e\x2dc10f\x2d7d76\x2dd68ae5c3fb44.mount: Deactivated successfully. 
Apr 16 23:47:31.055924 systemd[1]: run-netns-cni\x2dff1886b7\x2df097\x2d5836\x2d907e\x2d71279b876578.mount: Deactivated successfully. Apr 16 23:47:31.056035 systemd[1]: var-lib-kubelet-pods-1a6fa143\x2d264c\x2d498d\x2d9088\x2d0137213f34c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqshwx.mount: Deactivated successfully. Apr 16 23:47:31.056159 systemd[1]: var-lib-kubelet-pods-1a6fa143\x2d264c\x2d498d\x2d9088\x2d0137213f34c3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 23:47:31.775911 systemd[1]: Removed slice kubepods-besteffort-pod1a6fa143_264c_498d_9088_0137213f34c3.slice - libcontainer container kubepods-besteffort-pod1a6fa143_264c_498d_9088_0137213f34c3.slice. Apr 16 23:47:31.870930 systemd[1]: Created slice kubepods-besteffort-poda40696b7_5f87_4ecc_ac8a_5f38728fe516.slice - libcontainer container kubepods-besteffort-poda40696b7_5f87_4ecc_ac8a_5f38728fe516.slice. Apr 16 23:47:31.989610 kubelet[2803]: I0416 23:47:31.989476 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a40696b7-5f87-4ecc-ac8a-5f38728fe516-nginx-config\") pod \"whisker-585f57b96c-44vh9\" (UID: \"a40696b7-5f87-4ecc-ac8a-5f38728fe516\") " pod="calico-system/whisker-585f57b96c-44vh9" Apr 16 23:47:31.991926 kubelet[2803]: I0416 23:47:31.990930 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a40696b7-5f87-4ecc-ac8a-5f38728fe516-whisker-backend-key-pair\") pod \"whisker-585f57b96c-44vh9\" (UID: \"a40696b7-5f87-4ecc-ac8a-5f38728fe516\") " pod="calico-system/whisker-585f57b96c-44vh9" Apr 16 23:47:31.991926 kubelet[2803]: I0416 23:47:31.991411 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55fq\" (UniqueName: 
\"kubernetes.io/projected/a40696b7-5f87-4ecc-ac8a-5f38728fe516-kube-api-access-j55fq\") pod \"whisker-585f57b96c-44vh9\" (UID: \"a40696b7-5f87-4ecc-ac8a-5f38728fe516\") " pod="calico-system/whisker-585f57b96c-44vh9" Apr 16 23:47:31.992319 kubelet[2803]: I0416 23:47:31.991983 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a40696b7-5f87-4ecc-ac8a-5f38728fe516-whisker-ca-bundle\") pod \"whisker-585f57b96c-44vh9\" (UID: \"a40696b7-5f87-4ecc-ac8a-5f38728fe516\") " pod="calico-system/whisker-585f57b96c-44vh9" Apr 16 23:47:32.073968 systemd-networkd[1427]: cali18e0614d93c: Gained IPv6LL Apr 16 23:47:32.185831 containerd[1563]: time="2026-04-16T23:47:32.185781311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585f57b96c-44vh9,Uid:a40696b7-5f87-4ecc-ac8a-5f38728fe516,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:32.434071 containerd[1563]: time="2026-04-16T23:47:32.433581571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:32.439503 containerd[1563]: time="2026-04-16T23:47:32.439441180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 16 23:47:32.441765 containerd[1563]: time="2026-04-16T23:47:32.441723385Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:32.450119 containerd[1563]: time="2026-04-16T23:47:32.448950833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:32.453373 containerd[1563]: time="2026-04-16T23:47:32.453313533Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.45762611s" Apr 16 23:47:32.453530 containerd[1563]: time="2026-04-16T23:47:32.453499723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 23:47:32.462383 containerd[1563]: time="2026-04-16T23:47:32.462340385Z" level=info msg="CreateContainer within sandbox \"52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 23:47:32.496133 containerd[1563]: time="2026-04-16T23:47:32.493657587Z" level=info msg="Container daccdd9986852c9829af8ff80fdfc5fa2057502671c7b798c240287772e5eee7: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:47:32.509968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958067033.mount: Deactivated successfully. 
Apr 16 23:47:32.527261 containerd[1563]: time="2026-04-16T23:47:32.526554360Z" level=info msg="CreateContainer within sandbox \"52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"daccdd9986852c9829af8ff80fdfc5fa2057502671c7b798c240287772e5eee7\"" Apr 16 23:47:32.531341 containerd[1563]: time="2026-04-16T23:47:32.531247619Z" level=info msg="StartContainer for \"daccdd9986852c9829af8ff80fdfc5fa2057502671c7b798c240287772e5eee7\"" Apr 16 23:47:32.540692 containerd[1563]: time="2026-04-16T23:47:32.540575754Z" level=info msg="connecting to shim daccdd9986852c9829af8ff80fdfc5fa2057502671c7b798c240287772e5eee7" address="unix:///run/containerd/s/f0a0f1bbaba51b96be4028e1318f07089703ee9634ece49f906137a605328e8d" protocol=ttrpc version=3 Apr 16 23:47:32.541240 kubelet[2803]: I0416 23:47:32.541008 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6fa143-264c-498d-9088-0137213f34c3" path="/var/lib/kubelet/pods/1a6fa143-264c-498d-9088-0137213f34c3/volumes" Apr 16 23:47:32.600744 systemd[1]: Started cri-containerd-daccdd9986852c9829af8ff80fdfc5fa2057502671c7b798c240287772e5eee7.scope - libcontainer container daccdd9986852c9829af8ff80fdfc5fa2057502671c7b798c240287772e5eee7. 
Apr 16 23:47:32.622044 systemd-networkd[1427]: cali96cd330201b: Link UP Apr 16 23:47:32.622836 systemd-networkd[1427]: cali96cd330201b: Gained carrier Apr 16 23:47:32.676124 containerd[1563]: 2026-04-16 23:47:32.332 [ERROR][3967] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:47:32.676124 containerd[1563]: 2026-04-16 23:47:32.381 [INFO][3967] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0 whisker-585f57b96c- calico-system a40696b7-5f87-4ecc-ac8a-5f38728fe516 893 0 2026-04-16 23:47:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:585f57b96c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a whisker-585f57b96c-44vh9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali96cd330201b [] [] }} ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Namespace="calico-system" Pod="whisker-585f57b96c-44vh9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-" Apr 16 23:47:32.676124 containerd[1563]: 2026-04-16 23:47:32.381 [INFO][3967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Namespace="calico-system" Pod="whisker-585f57b96c-44vh9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" Apr 16 23:47:32.676124 containerd[1563]: 2026-04-16 23:47:32.484 [INFO][4008] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" HandleID="k8s-pod-network.25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" Apr 16 23:47:32.676554 containerd[1563]: 2026-04-16 23:47:32.514 [INFO][4008] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" HandleID="k8s-pod-network.25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037eef0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", "pod":"whisker-585f57b96c-44vh9", "timestamp":"2026-04-16 23:47:32.484037733 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f0580)} Apr 16 23:47:32.676554 containerd[1563]: 2026-04-16 23:47:32.514 [INFO][4008] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:47:32.676554 containerd[1563]: 2026-04-16 23:47:32.514 [INFO][4008] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:47:32.676554 containerd[1563]: 2026-04-16 23:47:32.515 [INFO][4008] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a' Apr 16 23:47:32.676554 containerd[1563]: 2026-04-16 23:47:32.529 [INFO][4008] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:32.676554 containerd[1563]: 2026-04-16 23:47:32.545 [INFO][4008] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:32.676554 containerd[1563]: 2026-04-16 23:47:32.558 [INFO][4008] ipam/ipam.go 526: Trying affinity for 192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:32.676554 containerd[1563]: 2026-04-16 23:47:32.562 [INFO][4008] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:32.676962 containerd[1563]: 2026-04-16 23:47:32.569 [INFO][4008] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:32.676962 containerd[1563]: 2026-04-16 23:47:32.569 [INFO][4008] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:32.676962 containerd[1563]: 2026-04-16 23:47:32.572 [INFO][4008] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe Apr 16 23:47:32.676962 containerd[1563]: 2026-04-16 23:47:32.583 [INFO][4008] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.0/26 
handle="k8s-pod-network.25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:32.676962 containerd[1563]: 2026-04-16 23:47:32.592 [INFO][4008] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.2/26] block=192.168.65.0/26 handle="k8s-pod-network.25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:32.676962 containerd[1563]: 2026-04-16 23:47:32.592 [INFO][4008] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.2/26] handle="k8s-pod-network.25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:32.676962 containerd[1563]: 2026-04-16 23:47:32.592 [INFO][4008] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:47:32.676962 containerd[1563]: 2026-04-16 23:47:32.592 [INFO][4008] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.2/26] IPv6=[] ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" HandleID="k8s-pod-network.25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" Apr 16 23:47:32.678858 containerd[1563]: 2026-04-16 23:47:32.609 [INFO][3967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Namespace="calico-system" Pod="whisker-585f57b96c-44vh9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0", GenerateName:"whisker-585f57b96c-", 
Namespace:"calico-system", SelfLink:"", UID:"a40696b7-5f87-4ecc-ac8a-5f38728fe516", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585f57b96c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"", Pod:"whisker-585f57b96c-44vh9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96cd330201b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:32.678988 containerd[1563]: 2026-04-16 23:47:32.609 [INFO][3967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.2/32] ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Namespace="calico-system" Pod="whisker-585f57b96c-44vh9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" Apr 16 23:47:32.678988 containerd[1563]: 2026-04-16 23:47:32.610 [INFO][3967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96cd330201b ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Namespace="calico-system" Pod="whisker-585f57b96c-44vh9" 
WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" Apr 16 23:47:32.678988 containerd[1563]: 2026-04-16 23:47:32.625 [INFO][3967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Namespace="calico-system" Pod="whisker-585f57b96c-44vh9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" Apr 16 23:47:32.679861 containerd[1563]: 2026-04-16 23:47:32.634 [INFO][3967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Namespace="calico-system" Pod="whisker-585f57b96c-44vh9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0", GenerateName:"whisker-585f57b96c-", Namespace:"calico-system", SelfLink:"", UID:"a40696b7-5f87-4ecc-ac8a-5f38728fe516", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585f57b96c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", 
ContainerID:"25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe", Pod:"whisker-585f57b96c-44vh9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96cd330201b", MAC:"3e:d6:e3:5d:12:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:32.680862 containerd[1563]: 2026-04-16 23:47:32.670 [INFO][3967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" Namespace="calico-system" Pod="whisker-585f57b96c-44vh9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-whisker--585f57b96c--44vh9-eth0" Apr 16 23:47:32.730681 containerd[1563]: time="2026-04-16T23:47:32.730629103Z" level=info msg="connecting to shim 25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe" address="unix:///run/containerd/s/5e5db236e4ed26d33e17c04851b8a264bf0c54534b5d2f5d3aec49395d2f3c48" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:47:32.797617 systemd[1]: Started cri-containerd-25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe.scope - libcontainer container 25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe. 
Apr 16 23:47:32.911266 containerd[1563]: time="2026-04-16T23:47:32.911217480Z" level=info msg="StartContainer for \"daccdd9986852c9829af8ff80fdfc5fa2057502671c7b798c240287772e5eee7\" returns successfully" Apr 16 23:47:32.914335 containerd[1563]: time="2026-04-16T23:47:32.913952149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 23:47:33.056690 kubelet[2803]: I0416 23:47:33.056412 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:47:33.070884 containerd[1563]: time="2026-04-16T23:47:33.070362728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585f57b96c-44vh9,Uid:a40696b7-5f87-4ecc-ac8a-5f38728fe516,Namespace:calico-system,Attempt:0,} returns sandbox id \"25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe\"" Apr 16 23:47:33.929240 systemd-networkd[1427]: cali96cd330201b: Gained IPv6LL Apr 16 23:47:33.931325 systemd-networkd[1427]: vxlan.calico: Link UP Apr 16 23:47:33.931332 systemd-networkd[1427]: vxlan.calico: Gained carrier Apr 16 23:47:34.619150 containerd[1563]: time="2026-04-16T23:47:34.618483796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:34.623517 containerd[1563]: time="2026-04-16T23:47:34.623474779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 16 23:47:34.624570 containerd[1563]: time="2026-04-16T23:47:34.624514006Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:34.630904 containerd[1563]: time="2026-04-16T23:47:34.630757254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:34.632707 containerd[1563]: time="2026-04-16T23:47:34.632496115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.71814693s" Apr 16 23:47:34.632707 containerd[1563]: time="2026-04-16T23:47:34.632537875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 16 23:47:34.635143 containerd[1563]: time="2026-04-16T23:47:34.634916785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 23:47:34.639870 containerd[1563]: time="2026-04-16T23:47:34.639410315Z" level=info msg="CreateContainer within sandbox \"52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 23:47:34.653284 containerd[1563]: time="2026-04-16T23:47:34.652386348Z" level=info msg="Container 5335772f9306986df384a706ebe7331410e753006760bee7ee6a3f5cba0d10d3: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:47:34.671640 containerd[1563]: time="2026-04-16T23:47:34.671594402Z" level=info msg="CreateContainer within sandbox \"52d054a85f24f77efa3c27db7ef310c35549327a33abc6b890955a9a3b1b95a8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5335772f9306986df384a706ebe7331410e753006760bee7ee6a3f5cba0d10d3\"" Apr 16 23:47:34.675183 containerd[1563]: time="2026-04-16T23:47:34.673493591Z" level=info msg="StartContainer for \"5335772f9306986df384a706ebe7331410e753006760bee7ee6a3f5cba0d10d3\"" Apr 16 
23:47:34.678614 containerd[1563]: time="2026-04-16T23:47:34.678307919Z" level=info msg="connecting to shim 5335772f9306986df384a706ebe7331410e753006760bee7ee6a3f5cba0d10d3" address="unix:///run/containerd/s/f0a0f1bbaba51b96be4028e1318f07089703ee9634ece49f906137a605328e8d" protocol=ttrpc version=3 Apr 16 23:47:34.715324 systemd[1]: Started cri-containerd-5335772f9306986df384a706ebe7331410e753006760bee7ee6a3f5cba0d10d3.scope - libcontainer container 5335772f9306986df384a706ebe7331410e753006760bee7ee6a3f5cba0d10d3. Apr 16 23:47:34.820688 containerd[1563]: time="2026-04-16T23:47:34.820577815Z" level=info msg="StartContainer for \"5335772f9306986df384a706ebe7331410e753006760bee7ee6a3f5cba0d10d3\" returns successfully" Apr 16 23:47:35.639747 kubelet[2803]: I0416 23:47:35.639710 2803 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 23:47:35.639747 kubelet[2803]: I0416 23:47:35.639760 2803 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 23:47:35.649902 containerd[1563]: time="2026-04-16T23:47:35.649836529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:35.653121 containerd[1563]: time="2026-04-16T23:47:35.652988687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 16 23:47:35.653342 containerd[1563]: time="2026-04-16T23:47:35.653292979Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:35.667481 containerd[1563]: time="2026-04-16T23:47:35.667019583Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:35.672305 containerd[1563]: time="2026-04-16T23:47:35.671749073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.036790093s" Apr 16 23:47:35.672305 containerd[1563]: time="2026-04-16T23:47:35.671902554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 16 23:47:35.680932 containerd[1563]: time="2026-04-16T23:47:35.680898458Z" level=info msg="CreateContainer within sandbox \"25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 23:47:35.692955 containerd[1563]: time="2026-04-16T23:47:35.690271978Z" level=info msg="Container 7670da7d8ce44f5e60ec8839fbe17ffa13ad9f31090f5f878049f65962418e1b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:47:35.703111 containerd[1563]: time="2026-04-16T23:47:35.703060373Z" level=info msg="CreateContainer within sandbox \"25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7670da7d8ce44f5e60ec8839fbe17ffa13ad9f31090f5f878049f65962418e1b\"" Apr 16 23:47:35.703960 containerd[1563]: time="2026-04-16T23:47:35.703910964Z" level=info msg="StartContainer for \"7670da7d8ce44f5e60ec8839fbe17ffa13ad9f31090f5f878049f65962418e1b\"" Apr 16 23:47:35.705724 containerd[1563]: time="2026-04-16T23:47:35.705688533Z" level=info msg="connecting to shim 
7670da7d8ce44f5e60ec8839fbe17ffa13ad9f31090f5f878049f65962418e1b" address="unix:///run/containerd/s/5e5db236e4ed26d33e17c04851b8a264bf0c54534b5d2f5d3aec49395d2f3c48" protocol=ttrpc version=3 Apr 16 23:47:35.742308 systemd[1]: Started cri-containerd-7670da7d8ce44f5e60ec8839fbe17ffa13ad9f31090f5f878049f65962418e1b.scope - libcontainer container 7670da7d8ce44f5e60ec8839fbe17ffa13ad9f31090f5f878049f65962418e1b. Apr 16 23:47:35.819612 containerd[1563]: time="2026-04-16T23:47:35.819559845Z" level=info msg="StartContainer for \"7670da7d8ce44f5e60ec8839fbe17ffa13ad9f31090f5f878049f65962418e1b\" returns successfully" Apr 16 23:47:35.825061 containerd[1563]: time="2026-04-16T23:47:35.825015512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 23:47:35.848345 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Apr 16 23:47:35.856127 kubelet[2803]: I0416 23:47:35.855815 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z22dv" podStartSLOduration=22.214802023 podStartE2EDuration="25.855793875s" podCreationTimestamp="2026-04-16 23:47:10 +0000 UTC" firstStartedPulling="2026-04-16 23:47:30.993699238 +0000 UTC m=+38.704883236" lastFinishedPulling="2026-04-16 23:47:34.634691089 +0000 UTC m=+42.345875088" observedRunningTime="2026-04-16 23:47:35.85454841 +0000 UTC m=+43.565732433" watchObservedRunningTime="2026-04-16 23:47:35.855793875 +0000 UTC m=+43.566977908" Apr 16 23:47:37.166846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155973736.mount: Deactivated successfully. 
Apr 16 23:47:37.188747 containerd[1563]: time="2026-04-16T23:47:37.188690278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:37.190279 containerd[1563]: time="2026-04-16T23:47:37.190066897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 16 23:47:37.191426 containerd[1563]: time="2026-04-16T23:47:37.191383137Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:37.194446 containerd[1563]: time="2026-04-16T23:47:37.194408450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:37.195652 containerd[1563]: time="2026-04-16T23:47:37.195323538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.370256936s" Apr 16 23:47:37.195652 containerd[1563]: time="2026-04-16T23:47:37.195364676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 16 23:47:37.201797 containerd[1563]: time="2026-04-16T23:47:37.201749504Z" level=info msg="CreateContainer within sandbox \"25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 23:47:37.211324 
containerd[1563]: time="2026-04-16T23:47:37.211284900Z" level=info msg="Container ec36da5dfbfd58e0d2cceaca95ea0166160a2314e55d997ab44358ac69371f6a: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:47:37.224438 containerd[1563]: time="2026-04-16T23:47:37.224358795Z" level=info msg="CreateContainer within sandbox \"25ca8782f69355b54ccac94ca7b94e9cdf1bc3998fb36f5673f0bed4738c1fbe\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ec36da5dfbfd58e0d2cceaca95ea0166160a2314e55d997ab44358ac69371f6a\"" Apr 16 23:47:37.225556 containerd[1563]: time="2026-04-16T23:47:37.225345763Z" level=info msg="StartContainer for \"ec36da5dfbfd58e0d2cceaca95ea0166160a2314e55d997ab44358ac69371f6a\"" Apr 16 23:47:37.227857 containerd[1563]: time="2026-04-16T23:47:37.227819678Z" level=info msg="connecting to shim ec36da5dfbfd58e0d2cceaca95ea0166160a2314e55d997ab44358ac69371f6a" address="unix:///run/containerd/s/5e5db236e4ed26d33e17c04851b8a264bf0c54534b5d2f5d3aec49395d2f3c48" protocol=ttrpc version=3 Apr 16 23:47:37.262323 systemd[1]: Started cri-containerd-ec36da5dfbfd58e0d2cceaca95ea0166160a2314e55d997ab44358ac69371f6a.scope - libcontainer container ec36da5dfbfd58e0d2cceaca95ea0166160a2314e55d997ab44358ac69371f6a. 
Apr 16 23:47:37.333394 containerd[1563]: time="2026-04-16T23:47:37.333336301Z" level=info msg="StartContainer for \"ec36da5dfbfd58e0d2cceaca95ea0166160a2314e55d997ab44358ac69371f6a\" returns successfully" Apr 16 23:47:37.869468 kubelet[2803]: I0416 23:47:37.868431 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-585f57b96c-44vh9" podStartSLOduration=2.757450938 podStartE2EDuration="6.868408554s" podCreationTimestamp="2026-04-16 23:47:31 +0000 UTC" firstStartedPulling="2026-04-16 23:47:33.085755616 +0000 UTC m=+40.796939612" lastFinishedPulling="2026-04-16 23:47:37.196713223 +0000 UTC m=+44.907897228" observedRunningTime="2026-04-16 23:47:37.867655823 +0000 UTC m=+45.578839845" watchObservedRunningTime="2026-04-16 23:47:37.868408554 +0000 UTC m=+45.579592577" Apr 16 23:47:38.626061 ntpd[1665]: Listen normally on 6 vxlan.calico 192.168.65.0:123 Apr 16 23:47:38.626173 ntpd[1665]: Listen normally on 7 cali18e0614d93c [fe80::ecee:eeff:feee:eeee%4]:123 Apr 16 23:47:38.626631 ntpd[1665]: 16 Apr 23:47:38 ntpd[1665]: Listen normally on 6 vxlan.calico 192.168.65.0:123 Apr 16 23:47:38.626631 ntpd[1665]: 16 Apr 23:47:38 ntpd[1665]: Listen normally on 7 cali18e0614d93c [fe80::ecee:eeff:feee:eeee%4]:123 Apr 16 23:47:38.626631 ntpd[1665]: 16 Apr 23:47:38 ntpd[1665]: Listen normally on 8 cali96cd330201b [fe80::ecee:eeff:feee:eeee%5]:123 Apr 16 23:47:38.626631 ntpd[1665]: 16 Apr 23:47:38 ntpd[1665]: Listen normally on 9 vxlan.calico [fe80::6487:1bff:fe8a:9e07%6]:123 Apr 16 23:47:38.626222 ntpd[1665]: Listen normally on 8 cali96cd330201b [fe80::ecee:eeff:feee:eeee%5]:123 Apr 16 23:47:38.626265 ntpd[1665]: Listen normally on 9 vxlan.calico [fe80::6487:1bff:fe8a:9e07%6]:123 Apr 16 23:47:40.531708 containerd[1563]: time="2026-04-16T23:47:40.531651225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gqpb9,Uid:2700e30b-1b7a-4e9b-9459-dc293eca7042,Namespace:kube-system,Attempt:0,}" Apr 16 23:47:40.678817 
systemd-networkd[1427]: cali71ff4bdb603: Link UP Apr 16 23:47:40.680397 systemd-networkd[1427]: cali71ff4bdb603: Gained carrier Apr 16 23:47:40.711073 containerd[1563]: 2026-04-16 23:47:40.586 [INFO][4394] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0 coredns-66bc5c9577- kube-system 2700e30b-1b7a-4e9b-9459-dc293eca7042 833 0 2026-04-16 23:46:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a coredns-66bc5c9577-gqpb9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71ff4bdb603 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Namespace="kube-system" Pod="coredns-66bc5c9577-gqpb9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-" Apr 16 23:47:40.711073 containerd[1563]: 2026-04-16 23:47:40.586 [INFO][4394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Namespace="kube-system" Pod="coredns-66bc5c9577-gqpb9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" Apr 16 23:47:40.711073 containerd[1563]: 2026-04-16 23:47:40.628 [INFO][4405] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" HandleID="k8s-pod-network.a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" 
Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" Apr 16 23:47:40.711437 containerd[1563]: 2026-04-16 23:47:40.637 [INFO][4405] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" HandleID="k8s-pod-network.a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000276170), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", "pod":"coredns-66bc5c9577-gqpb9", "timestamp":"2026-04-16 23:47:40.628002706 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001142c0)} Apr 16 23:47:40.711437 containerd[1563]: 2026-04-16 23:47:40.637 [INFO][4405] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:47:40.711437 containerd[1563]: 2026-04-16 23:47:40.637 [INFO][4405] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:47:40.711437 containerd[1563]: 2026-04-16 23:47:40.637 [INFO][4405] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a' Apr 16 23:47:40.711437 containerd[1563]: 2026-04-16 23:47:40.640 [INFO][4405] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:40.711437 containerd[1563]: 2026-04-16 23:47:40.645 [INFO][4405] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:40.711437 containerd[1563]: 2026-04-16 23:47:40.651 [INFO][4405] ipam/ipam.go 526: Trying affinity for 192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:40.711437 containerd[1563]: 2026-04-16 23:47:40.654 [INFO][4405] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:40.711888 containerd[1563]: 2026-04-16 23:47:40.656 [INFO][4405] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:40.711888 containerd[1563]: 2026-04-16 23:47:40.656 [INFO][4405] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:40.711888 containerd[1563]: 2026-04-16 23:47:40.658 [INFO][4405] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705 Apr 16 23:47:40.711888 containerd[1563]: 2026-04-16 23:47:40.665 [INFO][4405] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.0/26 
handle="k8s-pod-network.a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:40.711888 containerd[1563]: 2026-04-16 23:47:40.671 [INFO][4405] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.3/26] block=192.168.65.0/26 handle="k8s-pod-network.a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:40.711888 containerd[1563]: 2026-04-16 23:47:40.671 [INFO][4405] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.3/26] handle="k8s-pod-network.a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:40.711888 containerd[1563]: 2026-04-16 23:47:40.672 [INFO][4405] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:47:40.711888 containerd[1563]: 2026-04-16 23:47:40.672 [INFO][4405] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.3/26] IPv6=[] ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" HandleID="k8s-pod-network.a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" Apr 16 23:47:40.714204 containerd[1563]: 2026-04-16 23:47:40.674 [INFO][4394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Namespace="kube-system" Pod="coredns-66bc5c9577-gqpb9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0", GenerateName:"coredns-66bc5c9577-", 
Namespace:"kube-system", SelfLink:"", UID:"2700e30b-1b7a-4e9b-9459-dc293eca7042", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"", Pod:"coredns-66bc5c9577-gqpb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71ff4bdb603", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:40.714204 containerd[1563]: 2026-04-16 23:47:40.674 [INFO][4394] cni-plugin/k8s.go 419: Calico CNI using 
IPs: [192.168.65.3/32] ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Namespace="kube-system" Pod="coredns-66bc5c9577-gqpb9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" Apr 16 23:47:40.714204 containerd[1563]: 2026-04-16 23:47:40.675 [INFO][4394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71ff4bdb603 ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Namespace="kube-system" Pod="coredns-66bc5c9577-gqpb9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" Apr 16 23:47:40.714204 containerd[1563]: 2026-04-16 23:47:40.679 [INFO][4394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Namespace="kube-system" Pod="coredns-66bc5c9577-gqpb9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" Apr 16 23:47:40.714784 containerd[1563]: 2026-04-16 23:47:40.681 [INFO][4394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Namespace="kube-system" Pod="coredns-66bc5c9577-gqpb9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2700e30b-1b7a-4e9b-9459-dc293eca7042", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 46, 57, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705", Pod:"coredns-66bc5c9577-gqpb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71ff4bdb603", MAC:"62:76:70:a6:6b:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:40.714784 containerd[1563]: 2026-04-16 23:47:40.703 [INFO][4394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" Namespace="kube-system" 
Pod="coredns-66bc5c9577-gqpb9" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--gqpb9-eth0" Apr 16 23:47:40.781616 containerd[1563]: time="2026-04-16T23:47:40.781561147Z" level=info msg="connecting to shim a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705" address="unix:///run/containerd/s/72743ce0e5604599d83f8cfda5ac1661a5d36fcc2bf07368bf96895899e72094" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:47:40.827586 systemd[1]: Started cri-containerd-a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705.scope - libcontainer container a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705. Apr 16 23:47:40.920503 containerd[1563]: time="2026-04-16T23:47:40.920443131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gqpb9,Uid:2700e30b-1b7a-4e9b-9459-dc293eca7042,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705\"" Apr 16 23:47:40.928314 containerd[1563]: time="2026-04-16T23:47:40.928265118Z" level=info msg="CreateContainer within sandbox \"a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:47:40.943166 containerd[1563]: time="2026-04-16T23:47:40.940369993Z" level=info msg="Container cb8efe749eaac6e1072702b1f385686b1b5aa858fc30dfaf8e0789ae7a542384: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:47:40.952240 containerd[1563]: time="2026-04-16T23:47:40.952191966Z" level=info msg="CreateContainer within sandbox \"a2cc6b2f5469aa955ab549e4684ea97361e4c8bfb1ae0c09176802ee911e8705\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb8efe749eaac6e1072702b1f385686b1b5aa858fc30dfaf8e0789ae7a542384\"" Apr 16 23:47:40.954139 containerd[1563]: time="2026-04-16T23:47:40.952975977Z" level=info msg="StartContainer for 
\"cb8efe749eaac6e1072702b1f385686b1b5aa858fc30dfaf8e0789ae7a542384\"" Apr 16 23:47:40.954446 containerd[1563]: time="2026-04-16T23:47:40.954414079Z" level=info msg="connecting to shim cb8efe749eaac6e1072702b1f385686b1b5aa858fc30dfaf8e0789ae7a542384" address="unix:///run/containerd/s/72743ce0e5604599d83f8cfda5ac1661a5d36fcc2bf07368bf96895899e72094" protocol=ttrpc version=3 Apr 16 23:47:40.977413 systemd[1]: Started cri-containerd-cb8efe749eaac6e1072702b1f385686b1b5aa858fc30dfaf8e0789ae7a542384.scope - libcontainer container cb8efe749eaac6e1072702b1f385686b1b5aa858fc30dfaf8e0789ae7a542384. Apr 16 23:47:41.024307 containerd[1563]: time="2026-04-16T23:47:41.024251734Z" level=info msg="StartContainer for \"cb8efe749eaac6e1072702b1f385686b1b5aa858fc30dfaf8e0789ae7a542384\" returns successfully" Apr 16 23:47:41.907334 kubelet[2803]: I0416 23:47:41.907236 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gqpb9" podStartSLOduration=44.907212592 podStartE2EDuration="44.907212592s" podCreationTimestamp="2026-04-16 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:47:41.888316031 +0000 UTC m=+49.599500052" watchObservedRunningTime="2026-04-16 23:47:41.907212592 +0000 UTC m=+49.618396619" Apr 16 23:47:41.929298 systemd-networkd[1427]: cali71ff4bdb603: Gained IPv6LL Apr 16 23:47:42.532611 containerd[1563]: time="2026-04-16T23:47:42.532563168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s2djx,Uid:ae053e49-d4f7-4e65-86ff-52aeb12ce78a,Namespace:kube-system,Attempt:0,}" Apr 16 23:47:42.688489 systemd-networkd[1427]: cali1ce7b981d69: Link UP Apr 16 23:47:42.689918 systemd-networkd[1427]: cali1ce7b981d69: Gained carrier Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.588 [INFO][4523] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0 coredns-66bc5c9577- kube-system ae053e49-d4f7-4e65-86ff-52aeb12ce78a 830 0 2026-04-16 23:46:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a coredns-66bc5c9577-s2djx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ce7b981d69 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Namespace="kube-system" Pod="coredns-66bc5c9577-s2djx" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.588 [INFO][4523] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Namespace="kube-system" Pod="coredns-66bc5c9577-s2djx" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.624 [INFO][4535] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" HandleID="k8s-pod-network.b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.640 [INFO][4535] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" 
HandleID="k8s-pod-network.b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", "pod":"coredns-66bc5c9577-s2djx", "timestamp":"2026-04-16 23:47:42.624522276 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003711e0)} Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.640 [INFO][4535] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.640 [INFO][4535] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.640 [INFO][4535] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a' Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.646 [INFO][4535] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.654 [INFO][4535] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.659 [INFO][4535] ipam/ipam.go 526: Trying affinity for 192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.662 [INFO][4535] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.665 [INFO][4535] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.665 [INFO][4535] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.667 [INFO][4535] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86 Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.672 [INFO][4535] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.0/26 
handle="k8s-pod-network.b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.681 [INFO][4535] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.4/26] block=192.168.65.0/26 handle="k8s-pod-network.b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.681 [INFO][4535] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.4/26] handle="k8s-pod-network.b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.681 [INFO][4535] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:47:42.714798 containerd[1563]: 2026-04-16 23:47:42.681 [INFO][4535] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.4/26] IPv6=[] ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" HandleID="k8s-pod-network.b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" Apr 16 23:47:42.717307 containerd[1563]: 2026-04-16 23:47:42.683 [INFO][4523] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Namespace="kube-system" Pod="coredns-66bc5c9577-s2djx" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0", GenerateName:"coredns-66bc5c9577-", 
Namespace:"kube-system", SelfLink:"", UID:"ae053e49-d4f7-4e65-86ff-52aeb12ce78a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"", Pod:"coredns-66bc5c9577-s2djx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ce7b981d69", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:42.717307 containerd[1563]: 2026-04-16 23:47:42.684 [INFO][4523] cni-plugin/k8s.go 419: Calico CNI using 
IPs: [192.168.65.4/32] ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Namespace="kube-system" Pod="coredns-66bc5c9577-s2djx" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" Apr 16 23:47:42.717307 containerd[1563]: 2026-04-16 23:47:42.684 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ce7b981d69 ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Namespace="kube-system" Pod="coredns-66bc5c9577-s2djx" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" Apr 16 23:47:42.717307 containerd[1563]: 2026-04-16 23:47:42.691 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Namespace="kube-system" Pod="coredns-66bc5c9577-s2djx" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" Apr 16 23:47:42.718150 containerd[1563]: 2026-04-16 23:47:42.691 [INFO][4523] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Namespace="kube-system" Pod="coredns-66bc5c9577-s2djx" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ae053e49-d4f7-4e65-86ff-52aeb12ce78a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 46, 57, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86", Pod:"coredns-66bc5c9577-s2djx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ce7b981d69", MAC:"66:e3:1d:9c:fd:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:42.718150 containerd[1563]: 2026-04-16 23:47:42.711 [INFO][4523] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" Namespace="kube-system" 
Pod="coredns-66bc5c9577-s2djx" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-coredns--66bc5c9577--s2djx-eth0" Apr 16 23:47:42.761914 containerd[1563]: time="2026-04-16T23:47:42.761799138Z" level=info msg="connecting to shim b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86" address="unix:///run/containerd/s/59ef18de788d0a26ef7ec340fadb4ac1662146e7d0db5f96f7f92d23f0f16115" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:47:42.821378 systemd[1]: Started cri-containerd-b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86.scope - libcontainer container b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86. Apr 16 23:47:42.895441 containerd[1563]: time="2026-04-16T23:47:42.895390820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s2djx,Uid:ae053e49-d4f7-4e65-86ff-52aeb12ce78a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86\"" Apr 16 23:47:42.901588 containerd[1563]: time="2026-04-16T23:47:42.901543859Z" level=info msg="CreateContainer within sandbox \"b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:47:42.915129 containerd[1563]: time="2026-04-16T23:47:42.913882669Z" level=info msg="Container ad53bdbff6521e588eaa01e26072259f0c945e557dca842d561e6bc059f0e518: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:47:42.925034 containerd[1563]: time="2026-04-16T23:47:42.924981702Z" level=info msg="CreateContainer within sandbox \"b8f8ab1c30adddbdf912a8268d26c8da163728680f79d848b3a58f8edf7d6b86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad53bdbff6521e588eaa01e26072259f0c945e557dca842d561e6bc059f0e518\"" Apr 16 23:47:42.926172 containerd[1563]: time="2026-04-16T23:47:42.926125118Z" level=info msg="StartContainer for 
\"ad53bdbff6521e588eaa01e26072259f0c945e557dca842d561e6bc059f0e518\"" Apr 16 23:47:42.927612 containerd[1563]: time="2026-04-16T23:47:42.927579455Z" level=info msg="connecting to shim ad53bdbff6521e588eaa01e26072259f0c945e557dca842d561e6bc059f0e518" address="unix:///run/containerd/s/59ef18de788d0a26ef7ec340fadb4ac1662146e7d0db5f96f7f92d23f0f16115" protocol=ttrpc version=3 Apr 16 23:47:42.951335 systemd[1]: Started cri-containerd-ad53bdbff6521e588eaa01e26072259f0c945e557dca842d561e6bc059f0e518.scope - libcontainer container ad53bdbff6521e588eaa01e26072259f0c945e557dca842d561e6bc059f0e518. Apr 16 23:47:43.006451 containerd[1563]: time="2026-04-16T23:47:43.006401928Z" level=info msg="StartContainer for \"ad53bdbff6521e588eaa01e26072259f0c945e557dca842d561e6bc059f0e518\" returns successfully" Apr 16 23:47:43.785366 systemd-networkd[1427]: cali1ce7b981d69: Gained IPv6LL Apr 16 23:47:43.894753 kubelet[2803]: I0416 23:47:43.894615 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s2djx" podStartSLOduration=46.894594746 podStartE2EDuration="46.894594746s" podCreationTimestamp="2026-04-16 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:47:43.893960232 +0000 UTC m=+51.605144253" watchObservedRunningTime="2026-04-16 23:47:43.894594746 +0000 UTC m=+51.605778771" Apr 16 23:47:44.531748 containerd[1563]: time="2026-04-16T23:47:44.531679219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c69794945-pkspw,Uid:1ed38fd9-69a8-4273-aed3-ce32f3fe1e49,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:44.534156 containerd[1563]: time="2026-04-16T23:47:44.534082883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7d5sl,Uid:e4023aaa-91d8-4d6e-be91-82fbe65c18a7,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:44.537191 containerd[1563]: 
time="2026-04-16T23:47:44.537143088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c69794945-s56n5,Uid:18374daf-2055-4b0b-8c9d-1ca849a297fc,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:44.852679 systemd-networkd[1427]: cali20a238f93a2: Link UP Apr 16 23:47:44.855238 systemd-networkd[1427]: cali20a238f93a2: Gained carrier Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.664 [INFO][4669] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0 calico-apiserver-c69794945- calico-system 18374daf-2055-4b0b-8c9d-1ca849a297fc 835 0 2026-04-16 23:47:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c69794945 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a calico-apiserver-c69794945-s56n5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali20a238f93a2 [] [] }} ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Namespace="calico-system" Pod="calico-apiserver-c69794945-s56n5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.664 [INFO][4669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Namespace="calico-system" Pod="calico-apiserver-c69794945-s56n5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.757 [INFO][4697] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" HandleID="k8s-pod-network.5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.782 [INFO][4697] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" HandleID="k8s-pod-network.5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000353b90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", "pod":"calico-apiserver-c69794945-s56n5", "timestamp":"2026-04-16 23:47:44.757813296 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.782 [INFO][4697] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.782 [INFO][4697] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.782 [INFO][4697] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a' Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.787 [INFO][4697] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.796 [INFO][4697] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.807 [INFO][4697] ipam/ipam.go 526: Trying affinity for 192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.811 [INFO][4697] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.815 [INFO][4697] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.816 [INFO][4697] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.820 [INFO][4697] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.826 [INFO][4697] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.0/26 
handle="k8s-pod-network.5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.844 [INFO][4697] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.5/26] block=192.168.65.0/26 handle="k8s-pod-network.5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.844 [INFO][4697] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.5/26] handle="k8s-pod-network.5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.844 [INFO][4697] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:47:44.882234 containerd[1563]: 2026-04-16 23:47:44.844 [INFO][4697] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.5/26] IPv6=[] ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" HandleID="k8s-pod-network.5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" Apr 16 23:47:44.883387 containerd[1563]: 2026-04-16 23:47:44.848 [INFO][4669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Namespace="calico-system" Pod="calico-apiserver-c69794945-s56n5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0", 
GenerateName:"calico-apiserver-c69794945-", Namespace:"calico-system", SelfLink:"", UID:"18374daf-2055-4b0b-8c9d-1ca849a297fc", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c69794945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"", Pod:"calico-apiserver-c69794945-s56n5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20a238f93a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:44.883387 containerd[1563]: 2026-04-16 23:47:44.848 [INFO][4669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.5/32] ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Namespace="calico-system" Pod="calico-apiserver-c69794945-s56n5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" Apr 16 23:47:44.883387 containerd[1563]: 2026-04-16 23:47:44.848 [INFO][4669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20a238f93a2 ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Namespace="calico-system" 
Pod="calico-apiserver-c69794945-s56n5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" Apr 16 23:47:44.883387 containerd[1563]: 2026-04-16 23:47:44.855 [INFO][4669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Namespace="calico-system" Pod="calico-apiserver-c69794945-s56n5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" Apr 16 23:47:44.883387 containerd[1563]: 2026-04-16 23:47:44.857 [INFO][4669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Namespace="calico-system" Pod="calico-apiserver-c69794945-s56n5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0", GenerateName:"calico-apiserver-c69794945-", Namespace:"calico-system", SelfLink:"", UID:"18374daf-2055-4b0b-8c9d-1ca849a297fc", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c69794945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa", Pod:"calico-apiserver-c69794945-s56n5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20a238f93a2", MAC:"5e:4d:f9:41:08:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:44.883387 containerd[1563]: 2026-04-16 23:47:44.876 [INFO][4669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" Namespace="calico-system" Pod="calico-apiserver-c69794945-s56n5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--s56n5-eth0" Apr 16 23:47:44.966258 containerd[1563]: time="2026-04-16T23:47:44.965886985Z" level=info msg="connecting to shim 5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa" address="unix:///run/containerd/s/0a7a7dd509750903aff8e247fb437b2379cf1eb21bc6b8047fc63c143a215a1c" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:47:44.999868 systemd-networkd[1427]: cali244ea70493e: Link UP Apr 16 23:47:45.003730 systemd-networkd[1427]: cali244ea70493e: Gained carrier Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.740 [INFO][4654] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0 calico-apiserver-c69794945- calico-system 1ed38fd9-69a8-4273-aed3-ce32f3fe1e49 836 0 2026-04-16 23:47:09 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c69794945 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a calico-apiserver-c69794945-pkspw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali244ea70493e [] [] }} ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Namespace="calico-system" Pod="calico-apiserver-c69794945-pkspw" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.741 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Namespace="calico-system" Pod="calico-apiserver-c69794945-pkspw" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.819 [INFO][4712] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" HandleID="k8s-pod-network.3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.843 [INFO][4712] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" HandleID="k8s-pod-network.3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc00004fea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", "pod":"calico-apiserver-c69794945-pkspw", "timestamp":"2026-04-16 23:47:44.819709977 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188000)} Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.843 [INFO][4712] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.844 [INFO][4712] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.845 [INFO][4712] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a' Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.892 [INFO][4712] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.908 [INFO][4712] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.919 [INFO][4712] ipam/ipam.go 526: Trying affinity for 192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.929 [INFO][4712] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.943 [INFO][4712] 
ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.944 [INFO][4712] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.948 [INFO][4712] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44 Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.960 [INFO][4712] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.0/26 handle="k8s-pod-network.3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.989 [INFO][4712] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.6/26] block=192.168.65.0/26 handle="k8s-pod-network.3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.990 [INFO][4712] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.6/26] handle="k8s-pod-network.3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.990 [INFO][4712] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:47:45.045332 containerd[1563]: 2026-04-16 23:47:44.990 [INFO][4712] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.6/26] IPv6=[] ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" HandleID="k8s-pod-network.3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" Apr 16 23:47:45.049218 containerd[1563]: 2026-04-16 23:47:44.993 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Namespace="calico-system" Pod="calico-apiserver-c69794945-pkspw" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0", GenerateName:"calico-apiserver-c69794945-", Namespace:"calico-system", SelfLink:"", UID:"1ed38fd9-69a8-4273-aed3-ce32f3fe1e49", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c69794945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"", 
Pod:"calico-apiserver-c69794945-pkspw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali244ea70493e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:45.049218 containerd[1563]: 2026-04-16 23:47:44.993 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.6/32] ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Namespace="calico-system" Pod="calico-apiserver-c69794945-pkspw" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" Apr 16 23:47:45.049218 containerd[1563]: 2026-04-16 23:47:44.993 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali244ea70493e ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Namespace="calico-system" Pod="calico-apiserver-c69794945-pkspw" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" Apr 16 23:47:45.049218 containerd[1563]: 2026-04-16 23:47:45.006 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Namespace="calico-system" Pod="calico-apiserver-c69794945-pkspw" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" Apr 16 23:47:45.049218 containerd[1563]: 2026-04-16 23:47:45.008 [INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" Namespace="calico-system" Pod="calico-apiserver-c69794945-pkspw" 
WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0", GenerateName:"calico-apiserver-c69794945-", Namespace:"calico-system", SelfLink:"", UID:"1ed38fd9-69a8-4273-aed3-ce32f3fe1e49", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c69794945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44", Pod:"calico-apiserver-c69794945-pkspw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali244ea70493e", MAC:"f2:c2:45:14:9a:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:45.049218 containerd[1563]: 2026-04-16 23:47:45.026 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" 
Namespace="calico-system" Pod="calico-apiserver-c69794945-pkspw" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--apiserver--c69794945--pkspw-eth0" Apr 16 23:47:45.064160 systemd[1]: Started cri-containerd-5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa.scope - libcontainer container 5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa. Apr 16 23:47:45.125283 containerd[1563]: time="2026-04-16T23:47:45.124498633Z" level=info msg="connecting to shim 3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44" address="unix:///run/containerd/s/dc9c4dffe186b34b7b07ab28c307cedab0f739a36481e32bdba69ed8e1c88e0f" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:47:45.140182 systemd-networkd[1427]: cali1247c946883: Link UP Apr 16 23:47:45.144882 systemd-networkd[1427]: cali1247c946883: Gained carrier Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:44.731 [INFO][4662] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0 goldmane-cccfbd5cf- calico-system e4023aaa-91d8-4d6e-be91-82fbe65c18a7 832 0 2026-04-16 23:47:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a goldmane-cccfbd5cf-7d5sl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1247c946883 [] [] }} ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7d5sl" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:44.731 [INFO][4662] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7d5sl" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:44.832 [INFO][4710] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" HandleID="k8s-pod-network.3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:44.846 [INFO][4710] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" HandleID="k8s-pod-network.3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", "pod":"goldmane-cccfbd5cf-7d5sl", "timestamp":"2026-04-16 23:47:44.832908949 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000384c60)} Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:44.846 [INFO][4710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:44.990 [INFO][4710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:44.990 [INFO][4710] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a' Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:44.997 [INFO][4710] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.023 [INFO][4710] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.051 [INFO][4710] ipam/ipam.go 526: Trying affinity for 192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.062 [INFO][4710] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.071 [INFO][4710] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.072 [INFO][4710] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.0/26 handle="k8s-pod-network.3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.077 [INFO][4710] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666 Apr 16 23:47:45.187730 containerd[1563]: 
2026-04-16 23:47:45.088 [INFO][4710] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.0/26 handle="k8s-pod-network.3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.117 [INFO][4710] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.7/26] block=192.168.65.0/26 handle="k8s-pod-network.3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.118 [INFO][4710] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.7/26] handle="k8s-pod-network.3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.118 [INFO][4710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:47:45.187730 containerd[1563]: 2026-04-16 23:47:45.118 [INFO][4710] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.7/26] IPv6=[] ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" HandleID="k8s-pod-network.3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" Apr 16 23:47:45.188954 containerd[1563]: 2026-04-16 23:47:45.129 [INFO][4662] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7d5sl" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e4023aaa-91d8-4d6e-be91-82fbe65c18a7", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"", Pod:"goldmane-cccfbd5cf-7d5sl", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.65.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1247c946883", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:45.188954 containerd[1563]: 2026-04-16 23:47:45.130 [INFO][4662] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.7/32] ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7d5sl" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" Apr 16 23:47:45.188954 containerd[1563]: 2026-04-16 23:47:45.130 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1247c946883 ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7d5sl" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" Apr 16 23:47:45.188954 containerd[1563]: 2026-04-16 23:47:45.145 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7d5sl" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" Apr 16 23:47:45.188954 containerd[1563]: 2026-04-16 23:47:45.150 [INFO][4662] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7d5sl" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e4023aaa-91d8-4d6e-be91-82fbe65c18a7", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666", Pod:"goldmane-cccfbd5cf-7d5sl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1247c946883", MAC:"86:4e:3b:bc:4d:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:45.188954 containerd[1563]: 2026-04-16 23:47:45.178 [INFO][4662] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7d5sl" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-goldmane--cccfbd5cf--7d5sl-eth0" Apr 16 23:47:45.212660 systemd[1]: Started 
cri-containerd-3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44.scope - libcontainer container 3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44. Apr 16 23:47:45.245430 containerd[1563]: time="2026-04-16T23:47:45.245361365Z" level=info msg="connecting to shim 3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666" address="unix:///run/containerd/s/ea373a167e49ef8ce614c51b9e945e0c17d898a9acfa40cbcd53c8a93ad2e283" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:47:45.309585 containerd[1563]: time="2026-04-16T23:47:45.309491694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c69794945-s56n5,Uid:18374daf-2055-4b0b-8c9d-1ca849a297fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa\"" Apr 16 23:47:45.313089 containerd[1563]: time="2026-04-16T23:47:45.313040379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:47:45.314644 systemd[1]: Started cri-containerd-3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666.scope - libcontainer container 3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666. 
Apr 16 23:47:45.427952 containerd[1563]: time="2026-04-16T23:47:45.427776984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c69794945-pkspw,Uid:1ed38fd9-69a8-4273-aed3-ce32f3fe1e49,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44\"" Apr 16 23:47:45.476469 containerd[1563]: time="2026-04-16T23:47:45.476404319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7d5sl,Uid:e4023aaa-91d8-4d6e-be91-82fbe65c18a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666\"" Apr 16 23:47:45.530384 containerd[1563]: time="2026-04-16T23:47:45.530332656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8654597d95-sdzr5,Uid:8945b8a7-e39d-459e-b827-9b525646bee6,Namespace:calico-system,Attempt:0,}" Apr 16 23:47:45.690620 systemd-networkd[1427]: cali98c678bfe20: Link UP Apr 16 23:47:45.692836 systemd-networkd[1427]: cali98c678bfe20: Gained carrier Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.595 [INFO][4903] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0 calico-kube-controllers-8654597d95- calico-system 8945b8a7-e39d-459e-b827-9b525646bee6 831 0 2026-04-16 23:47:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8654597d95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a calico-kube-controllers-8654597d95-sdzr5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali98c678bfe20 [] [] }} 
ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Namespace="calico-system" Pod="calico-kube-controllers-8654597d95-sdzr5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.595 [INFO][4903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Namespace="calico-system" Pod="calico-kube-controllers-8654597d95-sdzr5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.636 [INFO][4915] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" HandleID="k8s-pod-network.2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.646 [INFO][4915] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" HandleID="k8s-pod-network.2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", "pod":"calico-kube-controllers-8654597d95-sdzr5", "timestamp":"2026-04-16 23:47:45.636557432 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002ca420)} Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.646 [INFO][4915] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.646 [INFO][4915] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.646 [INFO][4915] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a' Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.649 [INFO][4915] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.655 [INFO][4915] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.661 [INFO][4915] ipam/ipam.go 526: Trying affinity for 192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.663 [INFO][4915] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.666 [INFO][4915] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.0/26 host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.666 [INFO][4915] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.0/26 
handle="k8s-pod-network.2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.668 [INFO][4915] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5 Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.674 [INFO][4915] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.0/26 handle="k8s-pod-network.2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.683 [INFO][4915] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.8/26] block=192.168.65.0/26 handle="k8s-pod-network.2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.683 [INFO][4915] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.8/26] handle="k8s-pod-network.2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" host="ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a" Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.683 [INFO][4915] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:47:45.712797 containerd[1563]: 2026-04-16 23:47:45.683 [INFO][4915] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.8/26] IPv6=[] ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" HandleID="k8s-pod-network.2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Workload="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" Apr 16 23:47:45.716947 containerd[1563]: 2026-04-16 23:47:45.687 [INFO][4903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Namespace="calico-system" Pod="calico-kube-controllers-8654597d95-sdzr5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0", GenerateName:"calico-kube-controllers-8654597d95-", Namespace:"calico-system", SelfLink:"", UID:"8945b8a7-e39d-459e-b827-9b525646bee6", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8654597d95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"", Pod:"calico-kube-controllers-8654597d95-sdzr5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98c678bfe20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:47:45.716947 containerd[1563]: 2026-04-16 23:47:45.687 [INFO][4903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.8/32] ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Namespace="calico-system" Pod="calico-kube-controllers-8654597d95-sdzr5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" Apr 16 23:47:45.716947 containerd[1563]: 2026-04-16 23:47:45.687 [INFO][4903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98c678bfe20 ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Namespace="calico-system" Pod="calico-kube-controllers-8654597d95-sdzr5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" Apr 16 23:47:45.716947 containerd[1563]: 2026-04-16 23:47:45.691 [INFO][4903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Namespace="calico-system" Pod="calico-kube-controllers-8654597d95-sdzr5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" Apr 16 23:47:45.716947 containerd[1563]: 2026-04-16 23:47:45.692 [INFO][4903] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Namespace="calico-system" Pod="calico-kube-controllers-8654597d95-sdzr5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0", GenerateName:"calico-kube-controllers-8654597d95-", Namespace:"calico-system", SelfLink:"", UID:"8945b8a7-e39d-459e-b827-9b525646bee6", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8654597d95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260416-2100-13e0b3c282f039e02c5a", ContainerID:"2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5", Pod:"calico-kube-controllers-8654597d95-sdzr5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98c678bfe20", MAC:"6a:a4:9a:e1:fa:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 
23:47:45.716947 containerd[1563]: 2026-04-16 23:47:45.708 [INFO][4903] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" Namespace="calico-system" Pod="calico-kube-controllers-8654597d95-sdzr5" WorkloadEndpoint="ci--4459--2--4--nightly--20260416--2100--13e0b3c282f039e02c5a-k8s-calico--kube--controllers--8654597d95--sdzr5-eth0" Apr 16 23:47:45.760741 containerd[1563]: time="2026-04-16T23:47:45.760631985Z" level=info msg="connecting to shim 2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5" address="unix:///run/containerd/s/5a72241752387a748b6276ffb63eb2121e4f1edac7d2d8d1cb6c9644b43ff8cc" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:47:45.826657 systemd[1]: Started cri-containerd-2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5.scope - libcontainer container 2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5. Apr 16 23:47:45.906124 containerd[1563]: time="2026-04-16T23:47:45.905961900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8654597d95-sdzr5,Uid:8945b8a7-e39d-459e-b827-9b525646bee6,Namespace:calico-system,Attempt:0,} returns sandbox id \"2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5\"" Apr 16 23:47:46.025400 systemd-networkd[1427]: cali20a238f93a2: Gained IPv6LL Apr 16 23:47:46.280866 systemd-networkd[1427]: cali244ea70493e: Gained IPv6LL Apr 16 23:47:46.408335 systemd-networkd[1427]: cali1247c946883: Gained IPv6LL Apr 16 23:47:47.689116 systemd-networkd[1427]: cali98c678bfe20: Gained IPv6LL Apr 16 23:47:47.713133 containerd[1563]: time="2026-04-16T23:47:47.712509633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:47.715676 containerd[1563]: time="2026-04-16T23:47:47.715636583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: 
active requests=0, bytes read=48415780" Apr 16 23:47:47.716988 containerd[1563]: time="2026-04-16T23:47:47.716916259Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:47.722333 containerd[1563]: time="2026-04-16T23:47:47.722292255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:47.723304 containerd[1563]: time="2026-04-16T23:47:47.723267854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.409821111s" Apr 16 23:47:47.723468 containerd[1563]: time="2026-04-16T23:47:47.723440920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 23:47:47.727033 containerd[1563]: time="2026-04-16T23:47:47.725494452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:47:47.730179 containerd[1563]: time="2026-04-16T23:47:47.730142993Z" level=info msg="CreateContainer within sandbox \"5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:47:47.741068 containerd[1563]: time="2026-04-16T23:47:47.739304111Z" level=info msg="Container 9587d4187f9a10977f39e206ac0274f4897d36172a4385d4679189076f6f481f: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:47:47.767473 containerd[1563]: time="2026-04-16T23:47:47.767426769Z" 
level=info msg="CreateContainer within sandbox \"5eff760af15a12cfb712e18504ceb8e53ee47046ee09682d6daa4afdfacd9bfa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9587d4187f9a10977f39e206ac0274f4897d36172a4385d4679189076f6f481f\"" Apr 16 23:47:47.768641 containerd[1563]: time="2026-04-16T23:47:47.768608837Z" level=info msg="StartContainer for \"9587d4187f9a10977f39e206ac0274f4897d36172a4385d4679189076f6f481f\"" Apr 16 23:47:47.771239 containerd[1563]: time="2026-04-16T23:47:47.771155348Z" level=info msg="connecting to shim 9587d4187f9a10977f39e206ac0274f4897d36172a4385d4679189076f6f481f" address="unix:///run/containerd/s/0a7a7dd509750903aff8e247fb437b2379cf1eb21bc6b8047fc63c143a215a1c" protocol=ttrpc version=3 Apr 16 23:47:47.807331 systemd[1]: Started cri-containerd-9587d4187f9a10977f39e206ac0274f4897d36172a4385d4679189076f6f481f.scope - libcontainer container 9587d4187f9a10977f39e206ac0274f4897d36172a4385d4679189076f6f481f. Apr 16 23:47:47.884516 containerd[1563]: time="2026-04-16T23:47:47.884362556Z" level=info msg="StartContainer for \"9587d4187f9a10977f39e206ac0274f4897d36172a4385d4679189076f6f481f\" returns successfully" Apr 16 23:47:47.943261 containerd[1563]: time="2026-04-16T23:47:47.940924612Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:47.943261 containerd[1563]: time="2026-04-16T23:47:47.942623389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 23:47:47.949506 containerd[1563]: time="2026-04-16T23:47:47.949463359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 223.903593ms" Apr 16 23:47:47.949669 containerd[1563]: time="2026-04-16T23:47:47.949642593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 23:47:47.956602 containerd[1563]: time="2026-04-16T23:47:47.956523344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 23:47:47.962428 containerd[1563]: time="2026-04-16T23:47:47.962386587Z" level=info msg="CreateContainer within sandbox \"3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:47:47.980774 containerd[1563]: time="2026-04-16T23:47:47.978418851Z" level=info msg="Container 90cf5048c02e7647dc9ae610d252ad84d44b45ee4e4293d5bcb288256ead636d: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:47:47.992791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208061256.mount: Deactivated successfully. 
Apr 16 23:47:48.002395 containerd[1563]: time="2026-04-16T23:47:48.001451968Z" level=info msg="CreateContainer within sandbox \"3d03dd0fdac3d4a790bed0c4bcb653aaa52dc8f34af3bcd9b417cab3d35e9d44\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"90cf5048c02e7647dc9ae610d252ad84d44b45ee4e4293d5bcb288256ead636d\"" Apr 16 23:47:48.005428 containerd[1563]: time="2026-04-16T23:47:48.005374991Z" level=info msg="StartContainer for \"90cf5048c02e7647dc9ae610d252ad84d44b45ee4e4293d5bcb288256ead636d\"" Apr 16 23:47:48.008271 containerd[1563]: time="2026-04-16T23:47:48.008209760Z" level=info msg="connecting to shim 90cf5048c02e7647dc9ae610d252ad84d44b45ee4e4293d5bcb288256ead636d" address="unix:///run/containerd/s/dc9c4dffe186b34b7b07ab28c307cedab0f739a36481e32bdba69ed8e1c88e0f" protocol=ttrpc version=3 Apr 16 23:47:48.060564 systemd[1]: Started cri-containerd-90cf5048c02e7647dc9ae610d252ad84d44b45ee4e4293d5bcb288256ead636d.scope - libcontainer container 90cf5048c02e7647dc9ae610d252ad84d44b45ee4e4293d5bcb288256ead636d. 
Apr 16 23:47:48.152614 containerd[1563]: time="2026-04-16T23:47:48.152569933Z" level=info msg="StartContainer for \"90cf5048c02e7647dc9ae610d252ad84d44b45ee4e4293d5bcb288256ead636d\" returns successfully" Apr 16 23:47:49.058217 kubelet[2803]: I0416 23:47:49.056936 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-c69794945-s56n5" podStartSLOduration=37.644125065 podStartE2EDuration="40.056911788s" podCreationTimestamp="2026-04-16 23:47:09 +0000 UTC" firstStartedPulling="2026-04-16 23:47:45.311977532 +0000 UTC m=+53.023161530" lastFinishedPulling="2026-04-16 23:47:47.724764237 +0000 UTC m=+55.435948253" observedRunningTime="2026-04-16 23:47:47.95260117 +0000 UTC m=+55.663785191" watchObservedRunningTime="2026-04-16 23:47:49.056911788 +0000 UTC m=+56.768095811" Apr 16 23:47:49.847709 kubelet[2803]: I0416 23:47:49.847620 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-c69794945-pkspw" podStartSLOduration=38.323979154 podStartE2EDuration="40.847595019s" podCreationTimestamp="2026-04-16 23:47:09 +0000 UTC" firstStartedPulling="2026-04-16 23:47:45.430831189 +0000 UTC m=+53.142015193" lastFinishedPulling="2026-04-16 23:47:47.95444704 +0000 UTC m=+55.665631058" observedRunningTime="2026-04-16 23:47:49.063222652 +0000 UTC m=+56.774406674" watchObservedRunningTime="2026-04-16 23:47:49.847595019 +0000 UTC m=+57.558779041" Apr 16 23:47:50.030915 kubelet[2803]: I0416 23:47:50.030858 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:47:50.512069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063754067.mount: Deactivated successfully. 
Apr 16 23:47:50.626488 ntpd[1665]: Listen normally on 10 cali71ff4bdb603 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 16 23:47:50.626578 ntpd[1665]: Listen normally on 11 cali1ce7b981d69 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 16 23:47:50.628216 ntpd[1665]: 16 Apr 23:47:50 ntpd[1665]: Listen normally on 10 cali71ff4bdb603 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 16 23:47:50.628216 ntpd[1665]: 16 Apr 23:47:50 ntpd[1665]: Listen normally on 11 cali1ce7b981d69 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 16 23:47:50.628216 ntpd[1665]: 16 Apr 23:47:50 ntpd[1665]: Listen normally on 12 cali20a238f93a2 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 16 23:47:50.628216 ntpd[1665]: 16 Apr 23:47:50 ntpd[1665]: Listen normally on 13 cali244ea70493e [fe80::ecee:eeff:feee:eeee%12]:123 Apr 16 23:47:50.628216 ntpd[1665]: 16 Apr 23:47:50 ntpd[1665]: Listen normally on 14 cali1247c946883 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 16 23:47:50.628216 ntpd[1665]: 16 Apr 23:47:50 ntpd[1665]: Listen normally on 15 cali98c678bfe20 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 16 23:47:50.626618 ntpd[1665]: Listen normally on 12 cali20a238f93a2 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 16 23:47:50.626658 ntpd[1665]: Listen normally on 13 cali244ea70493e [fe80::ecee:eeff:feee:eeee%12]:123 Apr 16 23:47:50.626697 ntpd[1665]: Listen normally on 14 cali1247c946883 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 16 23:47:50.626736 ntpd[1665]: Listen normally on 15 cali98c678bfe20 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 16 23:47:51.569809 containerd[1563]: time="2026-04-16T23:47:51.569745863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:47:51.571239 containerd[1563]: time="2026-04-16T23:47:51.571186922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 16 23:47:51.572582 containerd[1563]: time="2026-04-16T23:47:51.572515489Z" level=info msg="ImageCreate event 
name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:51.575607 containerd[1563]: time="2026-04-16T23:47:51.575541056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:51.576818 containerd[1563]: time="2026-04-16T23:47:51.576644922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.618372003s"
Apr 16 23:47:51.576818 containerd[1563]: time="2026-04-16T23:47:51.576688033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Apr 16 23:47:51.578981 containerd[1563]: time="2026-04-16T23:47:51.578731806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Apr 16 23:47:51.583573 containerd[1563]: time="2026-04-16T23:47:51.583534079Z" level=info msg="CreateContainer within sandbox \"3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 16 23:47:51.595118 containerd[1563]: time="2026-04-16T23:47:51.593315004Z" level=info msg="Container 4c0cf4bb2ce866e8602ef097ed888cc6e20292526c5ff353eae8703835ef9504: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:47:51.618478 containerd[1563]: time="2026-04-16T23:47:51.618308777Z" level=info msg="CreateContainer within sandbox \"3d1b9b43ec7a40ef203173747b597376f4b6f10894c7047586121ec9c92ef666\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4c0cf4bb2ce866e8602ef097ed888cc6e20292526c5ff353eae8703835ef9504\""
Apr 16 23:47:51.624146 containerd[1563]: time="2026-04-16T23:47:51.624088803Z" level=info msg="StartContainer for \"4c0cf4bb2ce866e8602ef097ed888cc6e20292526c5ff353eae8703835ef9504\""
Apr 16 23:47:51.628982 containerd[1563]: time="2026-04-16T23:47:51.628944235Z" level=info msg="connecting to shim 4c0cf4bb2ce866e8602ef097ed888cc6e20292526c5ff353eae8703835ef9504" address="unix:///run/containerd/s/ea373a167e49ef8ce614c51b9e945e0c17d898a9acfa40cbcd53c8a93ad2e283" protocol=ttrpc version=3
Apr 16 23:47:51.664283 systemd[1]: Started cri-containerd-4c0cf4bb2ce866e8602ef097ed888cc6e20292526c5ff353eae8703835ef9504.scope - libcontainer container 4c0cf4bb2ce866e8602ef097ed888cc6e20292526c5ff353eae8703835ef9504.
Apr 16 23:47:51.737031 containerd[1563]: time="2026-04-16T23:47:51.736963051Z" level=info msg="StartContainer for \"4c0cf4bb2ce866e8602ef097ed888cc6e20292526c5ff353eae8703835ef9504\" returns successfully"
Apr 16 23:47:52.077543 kubelet[2803]: I0416 23:47:52.076903 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-7d5sl" podStartSLOduration=36.978249978 podStartE2EDuration="43.076879175s" podCreationTimestamp="2026-04-16 23:47:09 +0000 UTC" firstStartedPulling="2026-04-16 23:47:45.479235443 +0000 UTC m=+53.190419459" lastFinishedPulling="2026-04-16 23:47:51.577864657 +0000 UTC m=+59.289048656" observedRunningTime="2026-04-16 23:47:52.069592535 +0000 UTC m=+59.780776559" watchObservedRunningTime="2026-04-16 23:47:52.076879175 +0000 UTC m=+59.788063198"
Apr 16 23:47:54.225976 containerd[1563]: time="2026-04-16T23:47:54.225912313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:54.227380 containerd[1563]: time="2026-04-16T23:47:54.227322141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Apr 16 23:47:54.228790 containerd[1563]: time="2026-04-16T23:47:54.228716598Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:54.233035 containerd[1563]: time="2026-04-16T23:47:54.232949674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:47:54.234112 containerd[1563]: time="2026-04-16T23:47:54.233929231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.655148619s"
Apr 16 23:47:54.234112 containerd[1563]: time="2026-04-16T23:47:54.233973068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Apr 16 23:47:54.262628 containerd[1563]: time="2026-04-16T23:47:54.262572113Z" level=info msg="CreateContainer within sandbox \"2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 16 23:47:54.274033 containerd[1563]: time="2026-04-16T23:47:54.273994013Z" level=info msg="Container 8a515bc815242a78a573ceb39d76037f0703788c88a3e8aca6b4635ad21a41ab: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:47:54.291897 containerd[1563]: time="2026-04-16T23:47:54.291845600Z" level=info msg="CreateContainer within sandbox \"2396a525466e459eaf69d08f53b0fe3c324d220010dc46d2ca8fb4e02809efc5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8a515bc815242a78a573ceb39d76037f0703788c88a3e8aca6b4635ad21a41ab\""
Apr 16 23:47:54.294358 containerd[1563]: time="2026-04-16T23:47:54.293070472Z" level=info msg="StartContainer for \"8a515bc815242a78a573ceb39d76037f0703788c88a3e8aca6b4635ad21a41ab\""
Apr 16 23:47:54.295451 containerd[1563]: time="2026-04-16T23:47:54.295401049Z" level=info msg="connecting to shim 8a515bc815242a78a573ceb39d76037f0703788c88a3e8aca6b4635ad21a41ab" address="unix:///run/containerd/s/5a72241752387a748b6276ffb63eb2121e4f1edac7d2d8d1cb6c9644b43ff8cc" protocol=ttrpc version=3
Apr 16 23:47:54.325285 systemd[1]: Started cri-containerd-8a515bc815242a78a573ceb39d76037f0703788c88a3e8aca6b4635ad21a41ab.scope - libcontainer container 8a515bc815242a78a573ceb39d76037f0703788c88a3e8aca6b4635ad21a41ab.
Apr 16 23:47:54.402242 containerd[1563]: time="2026-04-16T23:47:54.402072868Z" level=info msg="StartContainer for \"8a515bc815242a78a573ceb39d76037f0703788c88a3e8aca6b4635ad21a41ab\" returns successfully"
Apr 16 23:47:55.082969 kubelet[2803]: I0416 23:47:55.082843 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8654597d95-sdzr5" podStartSLOduration=36.754761119 podStartE2EDuration="45.082816259s" podCreationTimestamp="2026-04-16 23:47:10 +0000 UTC" firstStartedPulling="2026-04-16 23:47:45.90751801 +0000 UTC m=+53.618702024" lastFinishedPulling="2026-04-16 23:47:54.235573163 +0000 UTC m=+61.946757164" observedRunningTime="2026-04-16 23:47:55.080873837 +0000 UTC m=+62.792057858" watchObservedRunningTime="2026-04-16 23:47:55.082816259 +0000 UTC m=+62.794000279"
Apr 16 23:48:01.258541 kubelet[2803]: I0416 23:48:01.258232 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:48:01.492458 systemd[1]: Started sshd@7-10.128.0.50:22-50.85.169.122:53058.service - OpenSSH per-connection server daemon (50.85.169.122:53058).
Apr 16 23:48:02.105539 sshd[5335]: Accepted publickey for core from 50.85.169.122 port 53058 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:02.107286 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:02.114582 systemd-logind[1534]: New session 8 of user core.
Apr 16 23:48:02.122317 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 16 23:48:02.539125 sshd[5338]: Connection closed by 50.85.169.122 port 53058
Apr 16 23:48:02.539996 sshd-session[5335]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:02.546208 systemd[1]: sshd@7-10.128.0.50:22-50.85.169.122:53058.service: Deactivated successfully.
Apr 16 23:48:02.549480 systemd[1]: session-8.scope: Deactivated successfully.
Apr 16 23:48:02.551408 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit.
Apr 16 23:48:02.553997 systemd-logind[1534]: Removed session 8.
Apr 16 23:48:07.661793 systemd[1]: Started sshd@8-10.128.0.50:22-50.85.169.122:53072.service - OpenSSH per-connection server daemon (50.85.169.122:53072).
Apr 16 23:48:08.253366 sshd[5381]: Accepted publickey for core from 50.85.169.122 port 53072 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:08.255212 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:08.262263 systemd-logind[1534]: New session 9 of user core.
Apr 16 23:48:08.268313 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 16 23:48:08.663762 sshd[5384]: Connection closed by 50.85.169.122 port 53072
Apr 16 23:48:08.664750 sshd-session[5381]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:08.671088 systemd[1]: sshd@8-10.128.0.50:22-50.85.169.122:53072.service: Deactivated successfully.
Apr 16 23:48:08.674422 systemd[1]: session-9.scope: Deactivated successfully.
Apr 16 23:48:08.676225 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit.
Apr 16 23:48:08.678800 systemd-logind[1534]: Removed session 9.
Apr 16 23:48:13.783073 systemd[1]: Started sshd@9-10.128.0.50:22-50.85.169.122:42188.service - OpenSSH per-connection server daemon (50.85.169.122:42188).
Apr 16 23:48:14.378135 sshd[5402]: Accepted publickey for core from 50.85.169.122 port 42188 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:14.379975 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:14.387778 systemd-logind[1534]: New session 10 of user core.
Apr 16 23:48:14.390305 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 16 23:48:14.827460 sshd[5405]: Connection closed by 50.85.169.122 port 42188
Apr 16 23:48:14.829370 sshd-session[5402]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:14.835268 systemd[1]: sshd@9-10.128.0.50:22-50.85.169.122:42188.service: Deactivated successfully.
Apr 16 23:48:14.838843 systemd[1]: session-10.scope: Deactivated successfully.
Apr 16 23:48:14.841004 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit.
Apr 16 23:48:14.842568 systemd-logind[1534]: Removed session 10.
Apr 16 23:48:19.946987 systemd[1]: Started sshd@10-10.128.0.50:22-50.85.169.122:59214.service - OpenSSH per-connection server daemon (50.85.169.122:59214).
Apr 16 23:48:20.538012 sshd[5450]: Accepted publickey for core from 50.85.169.122 port 59214 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:20.539736 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:20.547437 systemd-logind[1534]: New session 11 of user core.
Apr 16 23:48:20.552285 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 16 23:48:20.942503 sshd[5453]: Connection closed by 50.85.169.122 port 59214
Apr 16 23:48:20.944587 sshd-session[5450]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:20.950377 systemd[1]: sshd@10-10.128.0.50:22-50.85.169.122:59214.service: Deactivated successfully.
Apr 16 23:48:20.954140 systemd[1]: session-11.scope: Deactivated successfully.
Apr 16 23:48:20.955887 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit.
Apr 16 23:48:20.958067 systemd-logind[1534]: Removed session 11.
Apr 16 23:48:21.061987 systemd[1]: Started sshd@11-10.128.0.50:22-50.85.169.122:59222.service - OpenSSH per-connection server daemon (50.85.169.122:59222).
Apr 16 23:48:21.652321 sshd[5466]: Accepted publickey for core from 50.85.169.122 port 59222 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:21.653838 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:21.661736 systemd-logind[1534]: New session 12 of user core.
Apr 16 23:48:21.666308 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 16 23:48:22.101727 sshd[5485]: Connection closed by 50.85.169.122 port 59222
Apr 16 23:48:22.102598 sshd-session[5466]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:22.110345 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit.
Apr 16 23:48:22.113054 systemd[1]: sshd@11-10.128.0.50:22-50.85.169.122:59222.service: Deactivated successfully.
Apr 16 23:48:22.118636 systemd[1]: session-12.scope: Deactivated successfully.
Apr 16 23:48:22.126738 systemd-logind[1534]: Removed session 12.
Apr 16 23:48:22.227540 systemd[1]: Started sshd@12-10.128.0.50:22-50.85.169.122:59232.service - OpenSSH per-connection server daemon (50.85.169.122:59232).
Apr 16 23:48:22.829801 sshd[5495]: Accepted publickey for core from 50.85.169.122 port 59232 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:22.831549 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:22.838955 systemd-logind[1534]: New session 13 of user core.
Apr 16 23:48:22.847289 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 16 23:48:23.267836 sshd[5498]: Connection closed by 50.85.169.122 port 59232
Apr 16 23:48:23.268824 sshd-session[5495]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:23.275273 systemd[1]: sshd@12-10.128.0.50:22-50.85.169.122:59232.service: Deactivated successfully.
Apr 16 23:48:23.279241 systemd[1]: session-13.scope: Deactivated successfully.
Apr 16 23:48:23.280748 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit.
Apr 16 23:48:23.283628 systemd-logind[1534]: Removed session 13.
Apr 16 23:48:28.388981 systemd[1]: Started sshd@13-10.128.0.50:22-50.85.169.122:59244.service - OpenSSH per-connection server daemon (50.85.169.122:59244).
Apr 16 23:48:28.976153 sshd[5564]: Accepted publickey for core from 50.85.169.122 port 59244 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:28.977207 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:28.984135 systemd-logind[1534]: New session 14 of user core.
Apr 16 23:48:28.991331 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 16 23:48:29.376713 sshd[5569]: Connection closed by 50.85.169.122 port 59244
Apr 16 23:48:29.378371 sshd-session[5564]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:29.383830 systemd[1]: sshd@13-10.128.0.50:22-50.85.169.122:59244.service: Deactivated successfully.
Apr 16 23:48:29.387371 systemd[1]: session-14.scope: Deactivated successfully.
Apr 16 23:48:29.388761 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit.
Apr 16 23:48:29.391569 systemd-logind[1534]: Removed session 14.
Apr 16 23:48:29.498476 systemd[1]: Started sshd@14-10.128.0.50:22-50.85.169.122:59246.service - OpenSSH per-connection server daemon (50.85.169.122:59246).
Apr 16 23:48:30.086503 sshd[5581]: Accepted publickey for core from 50.85.169.122 port 59246 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:30.088229 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:30.094898 systemd-logind[1534]: New session 15 of user core.
Apr 16 23:48:30.101319 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 16 23:48:30.560293 sshd[5584]: Connection closed by 50.85.169.122 port 59246
Apr 16 23:48:30.562299 sshd-session[5581]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:30.568005 systemd[1]: sshd@14-10.128.0.50:22-50.85.169.122:59246.service: Deactivated successfully.
Apr 16 23:48:30.571922 systemd[1]: session-15.scope: Deactivated successfully.
Apr 16 23:48:30.573273 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit.
Apr 16 23:48:30.575481 systemd-logind[1534]: Removed session 15.
Apr 16 23:48:30.684761 systemd[1]: Started sshd@15-10.128.0.50:22-50.85.169.122:52488.service - OpenSSH per-connection server daemon (50.85.169.122:52488).
Apr 16 23:48:31.294152 sshd[5593]: Accepted publickey for core from 50.85.169.122 port 52488 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:31.295268 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:31.301948 systemd-logind[1534]: New session 16 of user core.
Apr 16 23:48:31.307294 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 16 23:48:32.349935 sshd[5596]: Connection closed by 50.85.169.122 port 52488
Apr 16 23:48:32.352353 sshd-session[5593]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:32.358800 systemd[1]: sshd@15-10.128.0.50:22-50.85.169.122:52488.service: Deactivated successfully.
Apr 16 23:48:32.362269 systemd[1]: session-16.scope: Deactivated successfully.
Apr 16 23:48:32.363768 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit.
Apr 16 23:48:32.367011 systemd-logind[1534]: Removed session 16.
Apr 16 23:48:32.471869 systemd[1]: Started sshd@16-10.128.0.50:22-50.85.169.122:52498.service - OpenSSH per-connection server daemon (50.85.169.122:52498).
Apr 16 23:48:33.080269 sshd[5619]: Accepted publickey for core from 50.85.169.122 port 52498 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:33.081872 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:33.089176 systemd-logind[1534]: New session 17 of user core.
Apr 16 23:48:33.094294 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 16 23:48:33.692424 sshd[5622]: Connection closed by 50.85.169.122 port 52498
Apr 16 23:48:33.693614 sshd-session[5619]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:33.700311 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit.
Apr 16 23:48:33.701226 systemd[1]: sshd@16-10.128.0.50:22-50.85.169.122:52498.service: Deactivated successfully.
Apr 16 23:48:33.705030 systemd[1]: session-17.scope: Deactivated successfully.
Apr 16 23:48:33.708133 systemd-logind[1534]: Removed session 17.
Apr 16 23:48:33.810424 systemd[1]: Started sshd@17-10.128.0.50:22-50.85.169.122:52500.service - OpenSSH per-connection server daemon (50.85.169.122:52500).
Apr 16 23:48:34.405582 sshd[5659]: Accepted publickey for core from 50.85.169.122 port 52500 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:34.406632 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:34.414182 systemd-logind[1534]: New session 18 of user core.
Apr 16 23:48:34.420302 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 23:48:34.807997 sshd[5662]: Connection closed by 50.85.169.122 port 52500
Apr 16 23:48:34.808923 sshd-session[5659]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:34.815065 systemd[1]: sshd@17-10.128.0.50:22-50.85.169.122:52500.service: Deactivated successfully.
Apr 16 23:48:34.818527 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 23:48:34.820213 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit.
Apr 16 23:48:34.822827 systemd-logind[1534]: Removed session 18.
Apr 16 23:48:39.926841 systemd[1]: Started sshd@18-10.128.0.50:22-50.85.169.122:40470.service - OpenSSH per-connection server daemon (50.85.169.122:40470).
Apr 16 23:48:40.519640 sshd[5677]: Accepted publickey for core from 50.85.169.122 port 40470 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:40.521418 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:40.530910 systemd-logind[1534]: New session 19 of user core.
Apr 16 23:48:40.534306 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 23:48:40.925159 sshd[5680]: Connection closed by 50.85.169.122 port 40470
Apr 16 23:48:40.926120 sshd-session[5677]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:40.932638 systemd[1]: sshd@18-10.128.0.50:22-50.85.169.122:40470.service: Deactivated successfully.
Apr 16 23:48:40.935936 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 23:48:40.937791 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit.
Apr 16 23:48:40.940671 systemd-logind[1534]: Removed session 19.
Apr 16 23:48:46.048495 systemd[1]: Started sshd@19-10.128.0.50:22-50.85.169.122:40472.service - OpenSSH per-connection server daemon (50.85.169.122:40472).
Apr 16 23:48:46.640220 sshd[5693]: Accepted publickey for core from 50.85.169.122 port 40472 ssh2: RSA SHA256:i0NwTsQFCPdTyeRJQBOezc2930h8nI0QDSjvPtldcVw
Apr 16 23:48:46.641946 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:48:46.649283 systemd-logind[1534]: New session 20 of user core.
Apr 16 23:48:46.655342 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 23:48:47.040469 sshd[5696]: Connection closed by 50.85.169.122 port 40472
Apr 16 23:48:47.042452 sshd-session[5693]: pam_unix(sshd:session): session closed for user core
Apr 16 23:48:47.050477 systemd[1]: sshd@19-10.128.0.50:22-50.85.169.122:40472.service: Deactivated successfully.
Apr 16 23:48:47.053672 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 23:48:47.056322 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit.
Apr 16 23:48:47.058296 systemd-logind[1534]: Removed session 20.