Mar 7 01:03:08.111322 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:03:08.111369 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:03:08.111388 kernel: BIOS-provided physical RAM map:
Mar 7 01:03:08.111402 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Mar 7 01:03:08.111416 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Mar 7 01:03:08.111430 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Mar 7 01:03:08.111447 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Mar 7 01:03:08.111466 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Mar 7 01:03:08.111481 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Mar 7 01:03:08.111495 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Mar 7 01:03:08.111510 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Mar 7 01:03:08.111525 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Mar 7 01:03:08.111539 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Mar 7 01:03:08.111555 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Mar 7 01:03:08.111577 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Mar 7 01:03:08.111594 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Mar 7 01:03:08.111611 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Mar 7 01:03:08.111628 kernel: NX (Execute Disable) protection: active
Mar 7 01:03:08.111644 kernel: APIC: Static calls initialized
Mar 7 01:03:08.111660 kernel: efi: EFI v2.7 by EDK II
Mar 7 01:03:08.111677 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Mar 7 01:03:08.111694 kernel: SMBIOS 2.4 present.
Mar 7 01:03:08.111711 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Mar 7 01:03:08.111728 kernel: Hypervisor detected: KVM
Mar 7 01:03:08.111748 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:03:08.111765 kernel: kvm-clock: using sched offset of 12621608291 cycles
Mar 7 01:03:08.111781 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:03:08.111796 kernel: tsc: Detected 2299.998 MHz processor
Mar 7 01:03:08.111813 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:03:08.111830 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:03:08.111847 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Mar 7 01:03:08.111864 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Mar 7 01:03:08.111881 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:03:08.111901 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 7 01:03:08.111916 kernel: Using GB pages for direct mapping
Mar 7 01:03:08.111931 kernel: Secure boot disabled
Mar 7 01:03:08.111948 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:03:08.111964 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Mar 7 01:03:08.111980 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Mar 7 01:03:08.111997 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Mar 7 01:03:08.112036 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Mar 7 01:03:08.112058 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Mar 7 01:03:08.112083 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Mar 7 01:03:08.112102 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Mar 7 01:03:08.112119 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Mar 7 01:03:08.112137 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Mar 7 01:03:08.112156 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Mar 7 01:03:08.112178 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Mar 7 01:03:08.112196 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Mar 7 01:03:08.112214 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Mar 7 01:03:08.112232 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Mar 7 01:03:08.112250 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Mar 7 01:03:08.112268 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Mar 7 01:03:08.112286 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Mar 7 01:03:08.112304 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Mar 7 01:03:08.112321 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Mar 7 01:03:08.112344 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Mar 7 01:03:08.112362 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:03:08.112380 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:03:08.112398 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 7 01:03:08.112416 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Mar 7 01:03:08.112433 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Mar 7 01:03:08.112450 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Mar 7 01:03:08.112467 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Mar 7 01:03:08.112486 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Mar 7 01:03:08.112508 kernel: Zone ranges:
Mar 7 01:03:08.112526 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:03:08.112544 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:03:08.112560 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Mar 7 01:03:08.112578 kernel: Movable zone start for each node
Mar 7 01:03:08.112595 kernel: Early memory node ranges
Mar 7 01:03:08.112612 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Mar 7 01:03:08.112630 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Mar 7 01:03:08.112646 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Mar 7 01:03:08.112668 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Mar 7 01:03:08.112686 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Mar 7 01:03:08.112705 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Mar 7 01:03:08.112724 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:03:08.112742 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Mar 7 01:03:08.112760 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Mar 7 01:03:08.112777 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 7 01:03:08.112796 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Mar 7 01:03:08.112814 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 7 01:03:08.112836 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:03:08.112854 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:03:08.112873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:03:08.112891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:03:08.112909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:03:08.112927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:03:08.112945 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:03:08.112963 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:03:08.112981 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 7 01:03:08.113003 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:03:08.113037 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:03:08.113055 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:03:08.113081 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:03:08.113099 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:03:08.113117 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:03:08.113134 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:03:08.113153 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:03:08.113173 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:03:08.113197 kernel: random: crng init done
Mar 7 01:03:08.113213 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 7 01:03:08.113232 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:03:08.113250 kernel: Fallback order for Node 0: 0
Mar 7 01:03:08.113268 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Mar 7 01:03:08.113286 kernel: Policy zone: Normal
Mar 7 01:03:08.113304 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:03:08.113322 kernel: software IO TLB: area num 2.
Mar 7 01:03:08.113341 kernel: Memory: 7513180K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 347144K reserved, 0K cma-reserved)
Mar 7 01:03:08.113364 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:03:08.113382 kernel: Kernel/User page tables isolation: enabled
Mar 7 01:03:08.113400 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:03:08.113418 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:03:08.113436 kernel: Dynamic Preempt: voluntary
Mar 7 01:03:08.113454 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:03:08.113473 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:03:08.113493 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:03:08.113530 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:03:08.113549 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:03:08.113568 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:03:08.113592 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:03:08.113612 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:03:08.113631 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:03:08.113651 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:03:08.113671 kernel: Console: colour dummy device 80x25
Mar 7 01:03:08.113694 kernel: printk: console [ttyS0] enabled
Mar 7 01:03:08.113713 kernel: ACPI: Core revision 20230628
Mar 7 01:03:08.113732 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:03:08.113752 kernel: x2apic enabled
Mar 7 01:03:08.113771 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:03:08.113790 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Mar 7 01:03:08.113810 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 7 01:03:08.113829 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Mar 7 01:03:08.113849 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Mar 7 01:03:08.113872 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Mar 7 01:03:08.113891 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:03:08.113910 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 7 01:03:08.113929 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 7 01:03:08.113947 kernel: Spectre V2 : Mitigation: IBRS
Mar 7 01:03:08.113966 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:03:08.113984 kernel: RETBleed: Mitigation: IBRS
Mar 7 01:03:08.114003 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 7 01:03:08.114043 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Mar 7 01:03:08.114072 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 7 01:03:08.114088 kernel: MDS: Mitigation: Clear CPU buffers
Mar 7 01:03:08.114105 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:03:08.114121 kernel: active return thunk: its_return_thunk
Mar 7 01:03:08.114136 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:03:08.114154 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:03:08.114172 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:03:08.114191 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:03:08.114209 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:03:08.114233 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 7 01:03:08.114252 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:03:08.114270 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:03:08.114287 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:03:08.114304 kernel: landlock: Up and running.
Mar 7 01:03:08.114321 kernel: SELinux: Initializing.
Mar 7 01:03:08.114339 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:03:08.114358 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:03:08.114376 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Mar 7 01:03:08.114398 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:03:08.114416 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:03:08.114435 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:03:08.114454 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Mar 7 01:03:08.114473 kernel: signal: max sigframe size: 1776
Mar 7 01:03:08.114492 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:03:08.114511 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:03:08.114530 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:03:08.114549 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:03:08.114572 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:03:08.114590 kernel: .... node #0, CPUs: #1
Mar 7 01:03:08.114610 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 7 01:03:08.114629 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:03:08.114647 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:03:08.114665 kernel: smpboot: Max logical packages: 1
Mar 7 01:03:08.114682 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 7 01:03:08.114700 kernel: devtmpfs: initialized
Mar 7 01:03:08.114723 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:03:08.114742 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Mar 7 01:03:08.114762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:03:08.114781 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:03:08.114800 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:03:08.114818 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:03:08.114835 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:03:08.114853 kernel: audit: type=2000 audit(1772845387.054:1): state=initialized audit_enabled=0 res=1
Mar 7 01:03:08.114869 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:03:08.114891 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:03:08.114909 kernel: cpuidle: using governor menu
Mar 7 01:03:08.114927 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:03:08.114945 kernel: dca service started, version 1.12.1
Mar 7 01:03:08.114963 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:03:08.114982 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:03:08.115000 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:03:08.115036 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:03:08.115056 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:03:08.115087 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:03:08.115106 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:03:08.115124 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:03:08.115142 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:03:08.115160 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 7 01:03:08.115179 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:03:08.115198 kernel: ACPI: Interpreter enabled
Mar 7 01:03:08.115215 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:03:08.115231 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:03:08.115255 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:03:08.115274 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 7 01:03:08.115293 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 7 01:03:08.115313 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:03:08.115584 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:03:08.115794 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 7 01:03:08.115989 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 7 01:03:08.116043 kernel: PCI host bridge to bus 0000:00
Mar 7 01:03:08.116247 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:03:08.116425 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:03:08.116595 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:03:08.116764 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 7 01:03:08.116934 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:03:08.117183 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 7 01:03:08.117398 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Mar 7 01:03:08.117600 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 7 01:03:08.117790 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 7 01:03:08.117989 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Mar 7 01:03:08.118226 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:03:08.118428 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Mar 7 01:03:08.118670 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:03:08.118861 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:03:08.119080 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Mar 7 01:03:08.119276 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:03:08.119462 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Mar 7 01:03:08.119647 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Mar 7 01:03:08.119669 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:03:08.119695 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:03:08.119714 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:03:08.119732 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:03:08.119751 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 7 01:03:08.119770 kernel: iommu: Default domain type: Translated
Mar 7 01:03:08.119787 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:03:08.119804 kernel: efivars: Registered efivars operations
Mar 7 01:03:08.119834 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:03:08.119854 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:03:08.119878 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 7 01:03:08.119899 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 7 01:03:08.119917 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 7 01:03:08.119934 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 7 01:03:08.119954 kernel: vgaarb: loaded
Mar 7 01:03:08.119974 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:03:08.119994 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:03:08.120072 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:03:08.120092 kernel: pnp: PnP ACPI init
Mar 7 01:03:08.120117 kernel: pnp: PnP ACPI: found 7 devices
Mar 7 01:03:08.120137 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:03:08.120157 kernel: NET: Registered PF_INET protocol family
Mar 7 01:03:08.120177 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:03:08.120198 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 7 01:03:08.120218 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:03:08.120238 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:03:08.120256 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 7 01:03:08.120277 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 7 01:03:08.120301 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:03:08.120321 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:03:08.120341 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:03:08.120360 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:03:08.120552 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:03:08.120723 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:03:08.120895 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:03:08.121117 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 7 01:03:08.121357 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 7 01:03:08.121384 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:03:08.121403 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:03:08.121422 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 7 01:03:08.121441 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:03:08.121460 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 7 01:03:08.121478 kernel: clocksource: Switched to clocksource tsc
Mar 7 01:03:08.121496 kernel: Initialise system trusted keyrings
Mar 7 01:03:08.121521 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 7 01:03:08.121540 kernel: Key type asymmetric registered
Mar 7 01:03:08.121558 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:03:08.121576 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:03:08.121594 kernel: io scheduler mq-deadline registered
Mar 7 01:03:08.121612 kernel: io scheduler kyber registered
Mar 7 01:03:08.121631 kernel: io scheduler bfq registered
Mar 7 01:03:08.121649 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:03:08.121668 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 7 01:03:08.121862 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 7 01:03:08.121886 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 7 01:03:08.122127 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 7 01:03:08.122152 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 7 01:03:08.122333 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 7 01:03:08.122357 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:03:08.122376 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:03:08.122394 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 7 01:03:08.122413 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 7 01:03:08.122437 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 7 01:03:08.122624 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 7 01:03:08.122649 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:03:08.122667 kernel: i8042: Warning: Keylock active
Mar 7 01:03:08.122686 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:03:08.122704 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:03:08.122892 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 7 01:03:08.123132 kernel: rtc_cmos 00:00: registered as rtc0
Mar 7 01:03:08.123307 kernel: rtc_cmos 00:00: setting system clock to 2026-03-07T01:03:07 UTC (1772845387)
Mar 7 01:03:08.123476 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 7 01:03:08.123498 kernel: intel_pstate: CPU model not supported
Mar 7 01:03:08.123517 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:03:08.123536 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:03:08.123554 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:03:08.123572 kernel: Segment Routing with IPv6
Mar 7 01:03:08.123591 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:03:08.123615 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:03:08.123634 kernel: Key type dns_resolver registered
Mar 7 01:03:08.123651 kernel: IPI shorthand broadcast: enabled
Mar 7 01:03:08.123670 kernel: sched_clock: Marking stable (895004389, 161330153)->(1107793671, -51459129)
Mar 7 01:03:08.123688 kernel: registered taskstats version 1
Mar 7 01:03:08.123706 kernel: Loading compiled-in X.509 certificates
Mar 7 01:03:08.123725 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:03:08.123743 kernel: Key type .fscrypt registered
Mar 7 01:03:08.123761 kernel: Key type fscrypt-provisioning registered
Mar 7 01:03:08.123783 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:03:08.123801 kernel: ima: No architecture policies found
Mar 7 01:03:08.123819 kernel: clk: Disabling unused clocks
Mar 7 01:03:08.123837 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:03:08.123856 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:03:08.123874 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:03:08.123893 kernel: Run /init as init process
Mar 7 01:03:08.123911 kernel: with arguments:
Mar 7 01:03:08.123929 kernel: /init
Mar 7 01:03:08.123951 kernel: with environment:
Mar 7 01:03:08.123969 kernel: HOME=/
Mar 7 01:03:08.123986 kernel: TERM=linux
Mar 7 01:03:08.124006 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 7 01:03:08.124051 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:03:08.124080 systemd[1]: Detected virtualization google.
Mar 7 01:03:08.124099 systemd[1]: Detected architecture x86-64.
Mar 7 01:03:08.124123 systemd[1]: Running in initrd.
Mar 7 01:03:08.124142 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:03:08.124160 systemd[1]: Hostname set to .
Mar 7 01:03:08.124180 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:03:08.124199 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:03:08.124219 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:03:08.124238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:03:08.124258 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:03:08.124282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:03:08.124301 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:03:08.124321 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:03:08.124343 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:03:08.124364 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:03:08.124383 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:03:08.124403 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:03:08.124427 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:03:08.124447 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:03:08.124486 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:03:08.124510 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:03:08.124530 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:03:08.124550 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:03:08.124574 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:03:08.124594 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:03:08.124615 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:03:08.124635 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:03:08.124655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:03:08.124675 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:03:08.124695 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:03:08.124715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:03:08.124736 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:03:08.124760 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:03:08.124780 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:03:08.124800 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:03:08.124820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:03:08.124874 systemd-journald[184]: Collecting audit messages is disabled.
Mar 7 01:03:08.124921 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:03:08.124942 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:03:08.124962 systemd-journald[184]: Journal started
Mar 7 01:03:08.125002 systemd-journald[184]: Runtime Journal (/run/log/journal/fff89a9a620c4e9bbecb8e289a554750) is 8.0M, max 148.7M, 140.7M free.
Mar 7 01:03:08.128066 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:03:08.130728 systemd-modules-load[185]: Inserted module 'overlay'
Mar 7 01:03:08.137776 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:03:08.148284 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:03:08.173226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:03:08.180034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:03:08.184047 kernel: Bridge firewalling registered Mar 7 01:03:08.183264 systemd-modules-load[185]: Inserted module 'br_netfilter' Mar 7 01:03:08.187345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:03:08.188677 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:03:08.202422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:03:08.206361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:03:08.222287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:03:08.235279 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:03:08.238242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:03:08.254404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:03:08.266259 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:03:08.270601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:03:08.276479 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:03:08.294246 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:03:08.308490 systemd-resolved[211]: Positive Trust Anchors: Mar 7 01:03:08.308508 systemd-resolved[211]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:03:08.308568 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:03:08.339288 dracut-cmdline[219]: dracut-dracut-053 Mar 7 01:03:08.339288 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:03:08.312686 systemd-resolved[211]: Defaulting to hostname 'linux'. Mar 7 01:03:08.314430 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:03:08.320613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:03:08.430065 kernel: SCSI subsystem initialized Mar 7 01:03:08.443061 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:03:08.454047 kernel: iscsi: registered transport (tcp) Mar 7 01:03:08.479416 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:03:08.479500 kernel: QLogic iSCSI HBA Driver Mar 7 01:03:08.533003 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:03:08.544236 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
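An aside on the dracut-cmdline entry above: `rootflags=rw mount.usrflags=ro` appears twice because dracut prepends its own defaults before the BOOT_IMAGE arguments, and the repeated occurrences restate the same values. As a sketch of how such a command line breaks down into parameters (a hypothetical helper, not dracut's actual parser; most kernel parameters are effectively last-value-wins):

```python
# Sketch: split a kernel command line into key/value parameters.
# Hypothetical helper, not part of dracut or Flatcar; duplicate keys
# collapse to the last value seen, and bare flags map to None.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        # partition on the FIRST "=" so values like "LABEL=ROOT" survive
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

line = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
        "BOOT_IMAGE=/flatcar/vmlinuz-a rootflags=rw mount.usrflags=ro "
        "root=LABEL=ROOT console=ttyS0,115200n8")
params = parse_cmdline(line)
# params["rootflags"] == "rw", params["root"] == "LABEL=ROOT"
```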
Mar 7 01:03:08.588215 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 01:03:08.588313 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:03:08.588340 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:03:08.633059 kernel: raid6: avx2x4 gen() 18279 MB/s Mar 7 01:03:08.650057 kernel: raid6: avx2x2 gen() 18019 MB/s Mar 7 01:03:08.667473 kernel: raid6: avx2x1 gen() 14077 MB/s Mar 7 01:03:08.667526 kernel: raid6: using algorithm avx2x4 gen() 18279 MB/s Mar 7 01:03:08.685556 kernel: raid6: .... xor() 7524 MB/s, rmw enabled Mar 7 01:03:08.685622 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:03:08.709060 kernel: xor: automatically using best checksumming function avx Mar 7 01:03:08.883068 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:03:08.896933 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:03:08.904314 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:03:08.935658 systemd-udevd[401]: Using default interface naming scheme 'v255'. Mar 7 01:03:08.943127 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:03:08.952235 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:03:08.988818 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Mar 7 01:03:09.027150 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:03:09.038212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:03:09.135512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:03:09.148311 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:03:09.185257 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Mar 7 01:03:09.196711 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:03:09.201119 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:03:09.203164 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:03:09.212252 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:03:09.262057 kernel: scsi host0: Virtio SCSI HBA Mar 7 01:03:09.264533 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:03:09.282130 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:03:09.290046 kernel: blk-mq: reduced tag depth to 10240 Mar 7 01:03:09.324570 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:03:09.324644 kernel: AES CTR mode by8 optimization enabled Mar 7 01:03:09.332721 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Mar 7 01:03:09.351634 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:03:09.351846 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:03:09.357143 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:03:09.366124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:03:09.366358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:03:09.366638 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:03:09.391403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 7 01:03:09.418314 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Mar 7 01:03:09.418621 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Mar 7 01:03:09.419592 kernel: sd 0:0:1:0: [sda] Write Protect is off Mar 7 01:03:09.419868 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Mar 7 01:03:09.422043 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 7 01:03:09.424238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:03:09.430418 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:03:09.430466 kernel: GPT:17805311 != 33554431 Mar 7 01:03:09.430492 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:03:09.430516 kernel: GPT:17805311 != 33554431 Mar 7 01:03:09.430539 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:03:09.430571 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:03:09.430595 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Mar 7 01:03:09.441283 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:03:09.491058 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (453) Mar 7 01:03:09.494645 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:03:09.500310 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (464) Mar 7 01:03:09.524729 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Mar 7 01:03:09.532790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Mar 7 01:03:09.540688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
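The GPT complaints above ("GPT:17805311 != 33554431") are the signature of a grown disk: the image was written for a smaller device, so the backup GPT header still sits at the old last LBA instead of the disk's current last LBA. A quick sketch of the arithmetic, with the sector size taken from the `sd` lines above:

```python
# Sketch: the numbers behind "GPT:17805311 != 33554431".
SECTOR = 512                       # "512-byte logical blocks" for sda
disk_lba_count = 33554432          # total logical blocks reported
backup_hdr_lba = 17805311          # where the backup GPT header actually is
expected_lba = disk_lba_count - 1  # where it should be: the last LBA, 33554431

disk_bytes = disk_lba_count * SECTOR            # current disk size: 16.0 GiB
original_bytes = (backup_hdr_lba + 1) * SECTOR  # size the image was built for, ~8.5 GiB
```

On first boot this is normally harmless: repartitioning later in this log rewrites the table (note the repeated `sda: sda1 sda2 …` rescans), and the kernel message itself points at GNU Parted; `sgdisk -e` likewise relocates the backup structures to the true end of the disk.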
Mar 7 01:03:09.547710 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Mar 7 01:03:09.547948 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Mar 7 01:03:09.566245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:03:09.581236 disk-uuid[554]: Primary Header is updated. Mar 7 01:03:09.581236 disk-uuid[554]: Secondary Entries is updated. Mar 7 01:03:09.581236 disk-uuid[554]: Secondary Header is updated. Mar 7 01:03:09.594139 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:03:09.611072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:03:09.629056 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:03:10.629852 disk-uuid[555]: The operation has completed successfully. Mar 7 01:03:10.639182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:03:10.708768 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:03:10.708978 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:03:10.733230 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:03:10.772404 sh[572]: Success Mar 7 01:03:10.797042 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 7 01:03:10.882962 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:03:10.890362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:03:10.914567 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 7 01:03:10.960259 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:03:10.960344 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:03:10.960370 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:03:10.969699 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:03:10.982219 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:03:11.014075 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 7 01:03:11.020371 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:03:11.021401 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:03:11.026393 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:03:11.114080 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:03:11.114111 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:03:11.114127 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:03:11.114144 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:03:11.114167 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:03:11.048230 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:03:11.136230 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:03:11.123232 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:03:11.148792 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:03:11.173251 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 7 01:03:11.288580 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:03:11.311327 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:03:11.381412 ignition[659]: Ignition 2.19.0 Mar 7 01:03:11.385464 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:03:11.381434 ignition[659]: Stage: fetch-offline Mar 7 01:03:11.393994 systemd-networkd[756]: lo: Link UP Mar 7 01:03:11.381507 ignition[659]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:03:11.394000 systemd-networkd[756]: lo: Gained carrier Mar 7 01:03:11.381532 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:03:11.395839 systemd-networkd[756]: Enumeration completed Mar 7 01:03:11.381744 ignition[659]: parsed url from cmdline: "" Mar 7 01:03:11.396458 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:03:11.381751 ignition[659]: no config URL provided Mar 7 01:03:11.396466 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:03:11.381760 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:03:11.398709 systemd-networkd[756]: eth0: Link UP Mar 7 01:03:11.381785 ignition[659]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:03:11.398716 systemd-networkd[756]: eth0: Gained carrier Mar 7 01:03:11.381802 ignition[659]: failed to fetch config: resource requires networking Mar 7 01:03:11.398728 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:03:11.383758 ignition[659]: Ignition finished successfully Mar 7 01:03:11.416481 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 7 01:03:11.509743 ignition[764]: Ignition 2.19.0 Mar 7 01:03:11.418136 systemd-networkd[756]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521' Mar 7 01:03:11.509752 ignition[764]: Stage: fetch Mar 7 01:03:11.418152 systemd-networkd[756]: eth0: DHCPv4 address 10.128.0.69/32, gateway 10.128.0.1 acquired from 169.254.169.254 Mar 7 01:03:11.509966 ignition[764]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:03:11.423936 systemd[1]: Reached target network.target - Network. Mar 7 01:03:11.509978 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:03:11.462260 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 7 01:03:11.510144 ignition[764]: parsed url from cmdline: "" Mar 7 01:03:11.518822 unknown[764]: fetched base config from "system" Mar 7 01:03:11.510151 ignition[764]: no config URL provided Mar 7 01:03:11.518834 unknown[764]: fetched base config from "system" Mar 7 01:03:11.510161 ignition[764]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:03:11.518846 unknown[764]: fetched user config from "gcp" Mar 7 01:03:11.510173 ignition[764]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:03:11.521286 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:03:11.510196 ignition[764]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Mar 7 01:03:11.545233 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:03:11.513944 ignition[764]: GET result: OK Mar 7 01:03:11.569684 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Mar 7 01:03:11.514045 ignition[764]: parsing config with SHA512: 3be4b95e9eb1872ab305adce493673ab47f3fb4246d98d1f020250ae2dbad402dd215846a92a5f9f7519a40b5e96f483cd94a8d56b76d949d25bfe10ee3dfb01 Mar 7 01:03:11.594250 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:03:11.519353 ignition[764]: fetch: fetch complete Mar 7 01:03:11.632232 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:03:11.519359 ignition[764]: fetch: fetch passed Mar 7 01:03:11.643396 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:03:11.519411 ignition[764]: Ignition finished successfully Mar 7 01:03:11.660202 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:03:11.567213 ignition[771]: Ignition 2.19.0 Mar 7 01:03:11.694256 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:03:11.567223 ignition[771]: Stage: kargs Mar 7 01:03:11.708214 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:03:11.567429 ignition[771]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:03:11.729204 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:03:11.567444 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:03:11.750260 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
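In the fetch stage above, ignition[764] retrieves user-data from the GCE metadata server (a GET against 169.254.169.254, which also requires the `Metadata-Flavor: Google` header, not shown in the log) and prints the SHA512 of the config before parsing it. That digest is simply the SHA-512 of the raw user-data bytes; a sketch with a stand-in payload, not the config this log actually fetched:

```python
# Sketch: reproduce Ignition's "parsing config with SHA512: ..." digest.
# The payload below is a stand-in; the real digest in the log comes from
# this instance's actual user-data bytes.
import hashlib

def config_digest(raw_config: bytes) -> str:
    return hashlib.sha512(raw_config).hexdigest()

digest = config_digest(b'{"ignition": {"version": "3.3.0"}}')
# Any SHA-512 hex digest is 128 characters long, like the one in the log.
```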
Mar 7 01:03:11.568539 ignition[771]: kargs: kargs passed Mar 7 01:03:11.568599 ignition[771]: Ignition finished successfully Mar 7 01:03:11.629895 ignition[776]: Ignition 2.19.0 Mar 7 01:03:11.629905 ignition[776]: Stage: disks Mar 7 01:03:11.630166 ignition[776]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:03:11.630180 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:03:11.631136 ignition[776]: disks: disks passed Mar 7 01:03:11.631201 ignition[776]: Ignition finished successfully Mar 7 01:03:11.803785 systemd-fsck[785]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 7 01:03:11.998187 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:03:12.028206 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:03:12.149070 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:03:12.149544 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:03:12.150470 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:03:12.183148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:03:12.208186 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:03:12.238235 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (793) Mar 7 01:03:12.238278 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:03:12.238305 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:03:12.209005 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Mar 7 01:03:12.288223 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:03:12.288274 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:03:12.288300 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:03:12.209122 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:03:12.209165 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:03:12.272069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:03:12.315746 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:03:12.338263 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 01:03:12.467988 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:03:12.478189 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:03:12.489364 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:03:12.499159 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:03:12.632194 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:03:12.637183 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:03:12.675052 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:03:12.683312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:03:12.693280 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 01:03:12.718111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 7 01:03:12.738192 ignition[905]: INFO : Ignition 2.19.0 Mar 7 01:03:12.738192 ignition[905]: INFO : Stage: mount Mar 7 01:03:12.738192 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:03:12.738192 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:03:12.738192 ignition[905]: INFO : mount: mount passed Mar 7 01:03:12.738192 ignition[905]: INFO : Ignition finished successfully Mar 7 01:03:12.739007 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:03:12.751141 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:03:13.155261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:03:13.201051 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (917) Mar 7 01:03:13.218987 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:03:13.219099 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:03:13.219127 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:03:13.242404 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:03:13.242490 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:03:13.245979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:03:13.280167 systemd-networkd[756]: eth0: Gained IPv6LL Mar 7 01:03:13.288924 ignition[934]: INFO : Ignition 2.19.0 Mar 7 01:03:13.288924 ignition[934]: INFO : Stage: files Mar 7 01:03:13.303222 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:03:13.303222 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:03:13.303222 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:03:13.303222 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:03:13.303222 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:03:13.303222 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:03:13.303222 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:03:13.303222 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:03:13.303222 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:03:13.303222 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:03:13.299202 unknown[934]: wrote ssh authorized keys file for user: core Mar 7 01:03:13.441207 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:03:13.525122 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 7 01:03:13.988336 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 01:03:15.062779 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:03:15.062779 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:03:15.102298 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:03:15.102298 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:03:15.102298 ignition[934]: INFO : files: files passed Mar 7 01:03:15.102298 ignition[934]: INFO : Ignition finished successfully Mar 7 01:03:15.068988 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:03:15.087286 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:03:15.119046 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:03:15.141736 systemd[1]: ignition-quench.service: Deactivated successfully. 
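The files-stage operations logged above (op(3) through op(a)) map onto `storage.files`, `storage.links`, and `systemd.units` entries in the instance's Ignition config. A hypothetical fragment that would produce similar operations — the paths and URLs are taken from the log, but the spec version, omitted fields, and the unit contents are illustrative, not the actual config:

```json
{
  "ignition": { "version": "3.3.0" },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
      },
      {
        "path": "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw",
        "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw" }
      }
    ],
    "links": [
      {
        "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." }
    ]
  }
}
```

The "setting preset to enabled" lines for prepare-helm.service correspond to the `enabled` flag on the unit entry.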
Mar 7 01:03:15.336382 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:03:15.336382 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:03:15.141895 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:03:15.403193 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:03:15.166539 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:03:15.181327 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:03:15.209311 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:03:15.297470 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:03:15.297596 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:03:15.315983 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:03:15.336215 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:03:15.353362 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:03:15.360226 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:03:15.413797 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:03:15.434586 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:03:15.472179 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:03:15.483349 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:03:15.502430 systemd[1]: Stopped target timers.target - Timer Units. 
Mar 7 01:03:15.521406 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:03:15.521617 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:03:15.553414 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:03:15.576312 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:03:15.595344 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:03:15.616420 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:03:15.638421 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:03:15.659346 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:03:15.679332 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:03:15.698398 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:03:15.716401 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:03:15.738358 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:03:15.756319 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:03:15.756490 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:03:15.784505 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:03:15.804388 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:03:15.825327 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Mar 7 01:03:15.961456 ignition[986]: INFO : Ignition 2.19.0 Mar 7 01:03:15.961456 ignition[986]: INFO : Stage: umount Mar 7 01:03:15.961456 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:03:15.961456 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:03:15.961456 ignition[986]: INFO : umount: umount passed Mar 7 01:03:15.961456 ignition[986]: INFO : Ignition finished successfully Mar 7 01:03:15.825499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:03:15.846433 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:03:15.846643 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:03:15.877485 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:03:15.877716 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:03:15.898416 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:03:15.898574 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:03:15.922298 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:03:15.937352 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:03:15.977208 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:03:15.977591 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:03:15.989549 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:03:15.989757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:03:16.025657 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:03:16.026760 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:03:16.026878 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Mar 7 01:03:16.043880 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:03:16.044001 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:03:16.063196 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:03:16.063321 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:03:16.069906 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:03:16.069975 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:03:16.099326 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:03:16.099418 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:03:16.117300 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:03:16.117387 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:03:16.137319 systemd[1]: Stopped target network.target - Network. Mar 7 01:03:16.137390 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:03:16.137494 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:03:16.165262 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:03:16.184209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:03:16.186103 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:03:16.203212 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:03:16.218214 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:03:16.235252 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:03:16.235344 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:03:16.253279 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:03:16.253376 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:03:16.273261 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 7 01:03:16.273364 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:03:16.293269 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:03:16.293364 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:03:16.313245 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:03:16.313339 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:03:16.333543 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:03:16.339096 systemd-networkd[756]: eth0: DHCPv6 lease lost Mar 7 01:03:16.351376 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:03:16.369819 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:03:16.369959 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:03:16.379883 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:03:16.380171 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:03:16.396606 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:03:16.937161 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Mar 7 01:03:16.396669 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:03:16.418250 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:03:16.429324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:03:16.429407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:03:16.447408 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:03:16.447482 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:03:16.465411 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:03:16.465487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Mar 7 01:03:16.490402 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:03:16.490484 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:03:16.518473 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:03:16.537789 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:03:16.537957 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:03:16.553349 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:03:16.553430 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:03:16.573384 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:03:16.573439 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:03:16.601308 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:03:16.601391 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:03:16.629407 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:03:16.629502 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:03:16.659407 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:03:16.659500 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:03:16.692200 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:03:16.730126 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:03:16.730235 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:03:16.748276 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:03:16.748373 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Mar 7 01:03:16.769258 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:03:16.769353 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:03:16.790252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:03:16.790344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:03:16.811872 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:03:16.811999 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:03:16.821705 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:03:16.821822 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:03:16.850665 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:03:16.861230 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:03:16.899635 systemd[1]: Switching root. Mar 7 01:03:17.328153 systemd-journald[184]: Journal stopped Mar 7 01:03:08.111322 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 01:03:08.111369 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:03:08.111388 kernel: BIOS-provided physical RAM map: Mar 7 01:03:08.111402 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Mar 7 01:03:08.111416 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Mar 7 01:03:08.111430 kernel: BIOS-e820: [mem 
0x0000000000055000-0x000000000005ffff] reserved Mar 7 01:03:08.111447 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Mar 7 01:03:08.111466 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Mar 7 01:03:08.111481 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Mar 7 01:03:08.111495 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Mar 7 01:03:08.111510 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Mar 7 01:03:08.111525 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Mar 7 01:03:08.111539 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Mar 7 01:03:08.111555 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Mar 7 01:03:08.111577 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Mar 7 01:03:08.111594 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Mar 7 01:03:08.111611 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Mar 7 01:03:08.111628 kernel: NX (Execute Disable) protection: active Mar 7 01:03:08.111644 kernel: APIC: Static calls initialized Mar 7 01:03:08.111660 kernel: efi: EFI v2.7 by EDK II Mar 7 01:03:08.111677 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 Mar 7 01:03:08.111694 kernel: SMBIOS 2.4 present. 
Mar 7 01:03:08.111711 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026 Mar 7 01:03:08.111728 kernel: Hypervisor detected: KVM Mar 7 01:03:08.111748 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 7 01:03:08.111765 kernel: kvm-clock: using sched offset of 12621608291 cycles Mar 7 01:03:08.111781 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 7 01:03:08.111796 kernel: tsc: Detected 2299.998 MHz processor Mar 7 01:03:08.111813 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 01:03:08.111830 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 01:03:08.111847 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Mar 7 01:03:08.111864 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Mar 7 01:03:08.111881 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 01:03:08.111901 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Mar 7 01:03:08.111916 kernel: Using GB pages for direct mapping Mar 7 01:03:08.111931 kernel: Secure boot disabled Mar 7 01:03:08.111948 kernel: ACPI: Early table checksum verification disabled Mar 7 01:03:08.111964 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Mar 7 01:03:08.111980 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Mar 7 01:03:08.111997 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Mar 7 01:03:08.112036 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Mar 7 01:03:08.112058 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Mar 7 01:03:08.112083 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Mar 7 01:03:08.112102 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Mar 7 01:03:08.112119 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 
Google GOOGSRAT 00000001 GOOG 00000001) Mar 7 01:03:08.112137 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Mar 7 01:03:08.112156 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Mar 7 01:03:08.112178 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Mar 7 01:03:08.112196 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Mar 7 01:03:08.112214 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Mar 7 01:03:08.112232 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Mar 7 01:03:08.112250 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Mar 7 01:03:08.112268 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Mar 7 01:03:08.112286 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Mar 7 01:03:08.112304 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Mar 7 01:03:08.112321 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Mar 7 01:03:08.112344 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Mar 7 01:03:08.112362 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 7 01:03:08.112380 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 7 01:03:08.112398 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Mar 7 01:03:08.112416 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Mar 7 01:03:08.112433 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Mar 7 01:03:08.112450 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Mar 7 01:03:08.112467 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Mar 7 01:03:08.112486 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Mar 7 01:03:08.112508 kernel: Zone 
ranges: Mar 7 01:03:08.112526 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:03:08.112544 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 7 01:03:08.112560 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Mar 7 01:03:08.112578 kernel: Movable zone start for each node Mar 7 01:03:08.112595 kernel: Early memory node ranges Mar 7 01:03:08.112612 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Mar 7 01:03:08.112630 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Mar 7 01:03:08.112646 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Mar 7 01:03:08.112668 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Mar 7 01:03:08.112686 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Mar 7 01:03:08.112705 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Mar 7 01:03:08.112724 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:03:08.112742 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Mar 7 01:03:08.112760 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Mar 7 01:03:08.112777 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Mar 7 01:03:08.112796 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Mar 7 01:03:08.112814 kernel: ACPI: PM-Timer IO Port: 0xb008 Mar 7 01:03:08.112836 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 01:03:08.112854 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:03:08.112873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 01:03:08.112891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:03:08.112909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 01:03:08.112927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 01:03:08.112945 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:03:08.112963 
kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 7 01:03:08.112981 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 7 01:03:08.113003 kernel: Booting paravirtualized kernel on KVM Mar 7 01:03:08.113037 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:03:08.113055 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 7 01:03:08.113081 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 7 01:03:08.113099 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 7 01:03:08.113117 kernel: pcpu-alloc: [0] 0 1 Mar 7 01:03:08.113134 kernel: kvm-guest: PV spinlocks enabled Mar 7 01:03:08.113153 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:03:08.113173 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:03:08.113197 kernel: random: crng init done Mar 7 01:03:08.113213 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 7 01:03:08.113232 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:03:08.113250 kernel: Fallback order for Node 0: 0 Mar 7 01:03:08.113268 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Mar 7 01:03:08.113286 kernel: Policy zone: Normal Mar 7 01:03:08.113304 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:03:08.113322 kernel: software IO TLB: area num 2. 
Mar 7 01:03:08.113341 kernel: Memory: 7513180K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 347144K reserved, 0K cma-reserved) Mar 7 01:03:08.113364 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:03:08.113382 kernel: Kernel/User page tables isolation: enabled Mar 7 01:03:08.113400 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:03:08.113418 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:03:08.113436 kernel: Dynamic Preempt: voluntary Mar 7 01:03:08.113454 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:03:08.113473 kernel: rcu: RCU event tracing is enabled. Mar 7 01:03:08.113493 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:03:08.113530 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:03:08.113549 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:03:08.113568 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:03:08.113592 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 01:03:08.113612 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:03:08.113631 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 7 01:03:08.113651 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 7 01:03:08.113671 kernel: Console: colour dummy device 80x25 Mar 7 01:03:08.113694 kernel: printk: console [ttyS0] enabled Mar 7 01:03:08.113713 kernel: ACPI: Core revision 20230628 Mar 7 01:03:08.113732 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:03:08.113752 kernel: x2apic enabled Mar 7 01:03:08.113771 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 01:03:08.113790 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Mar 7 01:03:08.113810 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Mar 7 01:03:08.113829 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Mar 7 01:03:08.113849 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Mar 7 01:03:08.113872 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Mar 7 01:03:08.113891 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:03:08.113910 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Mar 7 01:03:08.113929 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Mar 7 01:03:08.113947 kernel: Spectre V2 : Mitigation: IBRS Mar 7 01:03:08.113966 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:03:08.113984 kernel: RETBleed: Mitigation: IBRS Mar 7 01:03:08.114003 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 7 01:03:08.114043 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Mar 7 01:03:08.114072 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 7 01:03:08.114088 kernel: MDS: Mitigation: Clear CPU buffers Mar 7 01:03:08.114105 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:03:08.114121 kernel: active return thunk: its_return_thunk Mar 7 01:03:08.114136 kernel: ITS: Mitigation: 
Aligned branch/return thunks Mar 7 01:03:08.114154 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:03:08.114172 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:03:08.114191 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:03:08.114209 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:03:08.114233 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Mar 7 01:03:08.114252 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:03:08.114270 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:03:08.114287 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:03:08.114304 kernel: landlock: Up and running. Mar 7 01:03:08.114321 kernel: SELinux: Initializing. Mar 7 01:03:08.114339 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 7 01:03:08.114358 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 7 01:03:08.114376 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Mar 7 01:03:08.114398 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:03:08.114416 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:03:08.114435 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:03:08.114454 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Mar 7 01:03:08.114473 kernel: signal: max sigframe size: 1776 Mar 7 01:03:08.114492 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:03:08.114511 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:03:08.114530 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 7 01:03:08.114549 kernel: smp: Bringing up secondary CPUs ... 
Mar 7 01:03:08.114572 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:03:08.114590 kernel: .... node #0, CPUs: #1 Mar 7 01:03:08.114610 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Mar 7 01:03:08.114629 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 7 01:03:08.114647 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 01:03:08.114665 kernel: smpboot: Max logical packages: 1 Mar 7 01:03:08.114682 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Mar 7 01:03:08.114700 kernel: devtmpfs: initialized Mar 7 01:03:08.114723 kernel: x86/mm: Memory block size: 128MB Mar 7 01:03:08.114742 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Mar 7 01:03:08.114762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:03:08.114781 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 01:03:08.114800 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:03:08.114818 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:03:08.114835 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:03:08.114853 kernel: audit: type=2000 audit(1772845387.054:1): state=initialized audit_enabled=0 res=1 Mar 7 01:03:08.114869 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:03:08.114891 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 7 01:03:08.114909 kernel: cpuidle: using governor menu Mar 7 01:03:08.114927 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:03:08.114945 kernel: dca service started, version 1.12.1 Mar 7 01:03:08.114963 kernel: PCI: Using configuration type 1 for base access Mar 7 01:03:08.114982 kernel: 
kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 7 01:03:08.115000 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:03:08.115036 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:03:08.115056 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:03:08.115087 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:03:08.115106 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:03:08.115124 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:03:08.115142 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:03:08.115160 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Mar 7 01:03:08.115179 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 01:03:08.115198 kernel: ACPI: Interpreter enabled Mar 7 01:03:08.115215 kernel: ACPI: PM: (supports S0 S3 S5) Mar 7 01:03:08.115231 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 01:03:08.115255 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 01:03:08.115274 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 7 01:03:08.115293 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Mar 7 01:03:08.115313 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 01:03:08.115584 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 7 01:03:08.115794 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 7 01:03:08.115989 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 7 01:03:08.116043 kernel: PCI host bridge to bus 0000:00 Mar 7 01:03:08.116247 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 7 01:03:08.116425 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 7 01:03:08.116595 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:03:08.116764 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 7 01:03:08.116934 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:03:08.117183 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 7 01:03:08.117398 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Mar 7 01:03:08.117600 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 7 01:03:08.117790 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 7 01:03:08.117989 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Mar 7 01:03:08.118226 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:03:08.118428 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Mar 7 01:03:08.118670 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:03:08.118861 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:03:08.119080 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Mar 7 01:03:08.119276 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:03:08.119462 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Mar 7 01:03:08.119647 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Mar 7 01:03:08.119669 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:03:08.119695 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:03:08.119714 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:03:08.119732 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:03:08.119751 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 7 01:03:08.119770 kernel: iommu: Default domain type: Translated
Mar 7 01:03:08.119787 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:03:08.119804 kernel: efivars: Registered efivars operations
Mar 7 01:03:08.119834 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:03:08.119854 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:03:08.119878 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 7 01:03:08.119899 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 7 01:03:08.119917 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 7 01:03:08.119934 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 7 01:03:08.119954 kernel: vgaarb: loaded
Mar 7 01:03:08.119974 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:03:08.119994 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:03:08.120072 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:03:08.120092 kernel: pnp: PnP ACPI init
Mar 7 01:03:08.120117 kernel: pnp: PnP ACPI: found 7 devices
Mar 7 01:03:08.120137 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:03:08.120157 kernel: NET: Registered PF_INET protocol family
Mar 7 01:03:08.120177 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:03:08.120198 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 7 01:03:08.120218 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:03:08.120238 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:03:08.120256 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 7 01:03:08.120277 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 7 01:03:08.120301 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:03:08.120321 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:03:08.120341 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:03:08.120360 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:03:08.120552 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:03:08.120723 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:03:08.120895 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:03:08.121117 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 7 01:03:08.121357 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 7 01:03:08.121384 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:03:08.121403 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:03:08.121422 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 7 01:03:08.121441 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:03:08.121460 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 7 01:03:08.121478 kernel: clocksource: Switched to clocksource tsc
Mar 7 01:03:08.121496 kernel: Initialise system trusted keyrings
Mar 7 01:03:08.121521 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 7 01:03:08.121540 kernel: Key type asymmetric registered
Mar 7 01:03:08.121558 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:03:08.121576 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:03:08.121594 kernel: io scheduler mq-deadline registered
Mar 7 01:03:08.121612 kernel: io scheduler kyber registered
Mar 7 01:03:08.121631 kernel: io scheduler bfq registered
Mar 7 01:03:08.121649 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:03:08.121668 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 7 01:03:08.121862 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 7 01:03:08.121886 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 7 01:03:08.122127 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 7 01:03:08.122152 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 7 01:03:08.122333 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 7 01:03:08.122357 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:03:08.122376 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:03:08.122394 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 7 01:03:08.122413 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 7 01:03:08.122437 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 7 01:03:08.122624 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 7 01:03:08.122649 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:03:08.122667 kernel: i8042: Warning: Keylock active
Mar 7 01:03:08.122686 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:03:08.122704 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:03:08.122892 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 7 01:03:08.123132 kernel: rtc_cmos 00:00: registered as rtc0
Mar 7 01:03:08.123307 kernel: rtc_cmos 00:00: setting system clock to 2026-03-07T01:03:07 UTC (1772845387)
Mar 7 01:03:08.123476 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 7 01:03:08.123498 kernel: intel_pstate: CPU model not supported
Mar 7 01:03:08.123517 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:03:08.123536 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:03:08.123554 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:03:08.123572 kernel: Segment Routing with IPv6
Mar 7 01:03:08.123591 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:03:08.123615 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:03:08.123634 kernel: Key type dns_resolver registered
Mar 7 01:03:08.123651 kernel: IPI shorthand broadcast: enabled
Mar 7 01:03:08.123670 kernel: sched_clock: Marking stable (895004389, 161330153)->(1107793671, -51459129)
Mar 7 01:03:08.123688 kernel: registered taskstats version 1
Mar 7 01:03:08.123706 kernel: Loading compiled-in X.509 certificates
Mar 7 01:03:08.123725 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:03:08.123743 kernel: Key type .fscrypt registered
Mar 7 01:03:08.123761 kernel: Key type fscrypt-provisioning registered
Mar 7 01:03:08.123783 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:03:08.123801 kernel: ima: No architecture policies found
Mar 7 01:03:08.123819 kernel: clk: Disabling unused clocks
Mar 7 01:03:08.123837 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:03:08.123856 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:03:08.123874 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:03:08.123893 kernel: Run /init as init process
Mar 7 01:03:08.123911 kernel: with arguments:
Mar 7 01:03:08.123929 kernel: /init
Mar 7 01:03:08.123951 kernel: with environment:
Mar 7 01:03:08.123969 kernel: HOME=/
Mar 7 01:03:08.123986 kernel: TERM=linux
Mar 7 01:03:08.124006 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 7 01:03:08.124051 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:03:08.124080 systemd[1]: Detected virtualization google.
Mar 7 01:03:08.124099 systemd[1]: Detected architecture x86-64.
Mar 7 01:03:08.124123 systemd[1]: Running in initrd.
Mar 7 01:03:08.124142 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:03:08.124160 systemd[1]: Hostname set to .
Mar 7 01:03:08.124180 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:03:08.124199 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:03:08.124219 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:03:08.124238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:03:08.124258 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:03:08.124282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:03:08.124301 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:03:08.124321 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:03:08.124343 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:03:08.124364 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:03:08.124383 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:03:08.124403 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:03:08.124427 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:03:08.124447 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:03:08.124486 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:03:08.124510 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:03:08.124530 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:03:08.124550 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:03:08.124574 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:03:08.124594 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:03:08.124615 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:03:08.124635 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:03:08.124655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:03:08.124675 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:03:08.124695 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:03:08.124715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:03:08.124736 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:03:08.124760 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:03:08.124780 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:03:08.124800 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:03:08.124820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:03:08.124874 systemd-journald[184]: Collecting audit messages is disabled.
Mar 7 01:03:08.124921 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:03:08.124942 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:03:08.124962 systemd-journald[184]: Journal started
Mar 7 01:03:08.125002 systemd-journald[184]: Runtime Journal (/run/log/journal/fff89a9a620c4e9bbecb8e289a554750) is 8.0M, max 148.7M, 140.7M free.
Mar 7 01:03:08.128066 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:03:08.130728 systemd-modules-load[185]: Inserted module 'overlay'
Mar 7 01:03:08.137776 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:03:08.148284 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:03:08.173226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:03:08.180034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:03:08.184047 kernel: Bridge firewalling registered
Mar 7 01:03:08.183264 systemd-modules-load[185]: Inserted module 'br_netfilter'
Mar 7 01:03:08.187345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:03:08.188677 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:03:08.202422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:03:08.206361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:03:08.222287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:03:08.235279 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:03:08.238242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:03:08.254404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:03:08.266259 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:03:08.270601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:03:08.276479 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:03:08.294246 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:03:08.308490 systemd-resolved[211]: Positive Trust Anchors:
Mar 7 01:03:08.308508 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:03:08.308568 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:03:08.339288 dracut-cmdline[219]: dracut-dracut-053
Mar 7 01:03:08.339288 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:03:08.312686 systemd-resolved[211]: Defaulting to hostname 'linux'.
Mar 7 01:03:08.314430 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:03:08.320613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:03:08.430065 kernel: SCSI subsystem initialized
Mar 7 01:03:08.443061 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:03:08.454047 kernel: iscsi: registered transport (tcp)
Mar 7 01:03:08.479416 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:03:08.479500 kernel: QLogic iSCSI HBA Driver
Mar 7 01:03:08.533003 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:03:08.544236 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:03:08.588215 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:03:08.588313 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:03:08.588340 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:03:08.633059 kernel: raid6: avx2x4 gen() 18279 MB/s
Mar 7 01:03:08.650057 kernel: raid6: avx2x2 gen() 18019 MB/s
Mar 7 01:03:08.667473 kernel: raid6: avx2x1 gen() 14077 MB/s
Mar 7 01:03:08.667526 kernel: raid6: using algorithm avx2x4 gen() 18279 MB/s
Mar 7 01:03:08.685556 kernel: raid6: .... xor() 7524 MB/s, rmw enabled
Mar 7 01:03:08.685622 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:03:08.709060 kernel: xor: automatically using best checksumming function avx
Mar 7 01:03:08.883068 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:03:08.896933 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:03:08.904314 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:03:08.935658 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Mar 7 01:03:08.943127 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:03:08.952235 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:03:08.988818 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Mar 7 01:03:09.027150 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:03:09.038212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:03:09.135512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:03:09.148311 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:03:09.185257 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:03:09.196711 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:03:09.201119 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:03:09.203164 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:03:09.212252 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:03:09.262057 kernel: scsi host0: Virtio SCSI HBA
Mar 7 01:03:09.264533 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:03:09.282130 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:03:09.290046 kernel: blk-mq: reduced tag depth to 10240
Mar 7 01:03:09.324570 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:03:09.324644 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:03:09.332721 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Mar 7 01:03:09.351634 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:03:09.351846 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:03:09.357143 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:03:09.366124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:03:09.366358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:03:09.366638 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:03:09.391403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:03:09.418314 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Mar 7 01:03:09.418621 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Mar 7 01:03:09.419592 kernel: sd 0:0:1:0: [sda] Write Protect is off
Mar 7 01:03:09.419868 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Mar 7 01:03:09.422043 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 7 01:03:09.424238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:03:09.430418 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:03:09.430466 kernel: GPT:17805311 != 33554431
Mar 7 01:03:09.430492 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:03:09.430516 kernel: GPT:17805311 != 33554431
Mar 7 01:03:09.430539 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:03:09.430571 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:03:09.430595 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Mar 7 01:03:09.441283 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:03:09.491058 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (453)
Mar 7 01:03:09.494645 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:03:09.500310 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (464)
Mar 7 01:03:09.524729 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Mar 7 01:03:09.532790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Mar 7 01:03:09.540688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 7 01:03:09.547710 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Mar 7 01:03:09.547948 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Mar 7 01:03:09.566245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:03:09.581236 disk-uuid[554]: Primary Header is updated.
Mar 7 01:03:09.581236 disk-uuid[554]: Secondary Entries is updated.
Mar 7 01:03:09.581236 disk-uuid[554]: Secondary Header is updated.
Mar 7 01:03:09.594139 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:03:09.611072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:03:09.629056 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:03:10.629852 disk-uuid[555]: The operation has completed successfully.
Mar 7 01:03:10.639182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:03:10.708768 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:03:10.708978 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:03:10.733230 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:03:10.772404 sh[572]: Success
Mar 7 01:03:10.797042 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 7 01:03:10.882962 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:03:10.890362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:03:10.914567 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:03:10.960259 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:03:10.960344 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:03:10.960370 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:03:10.969699 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:03:10.982219 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:03:11.014075 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:03:11.020371 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:03:11.021401 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:03:11.026393 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:03:11.114080 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:03:11.114111 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:03:11.114127 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:03:11.114144 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:03:11.114167 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:03:11.048230 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:03:11.136230 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:03:11.123232 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:03:11.148792 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:03:11.173251 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:03:11.288580 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:03:11.311327 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:03:11.381412 ignition[659]: Ignition 2.19.0
Mar 7 01:03:11.385464 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:03:11.381434 ignition[659]: Stage: fetch-offline
Mar 7 01:03:11.393994 systemd-networkd[756]: lo: Link UP
Mar 7 01:03:11.381507 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:03:11.394000 systemd-networkd[756]: lo: Gained carrier
Mar 7 01:03:11.381532 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:03:11.395839 systemd-networkd[756]: Enumeration completed
Mar 7 01:03:11.381744 ignition[659]: parsed url from cmdline: ""
Mar 7 01:03:11.396458 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:03:11.381751 ignition[659]: no config URL provided
Mar 7 01:03:11.396466 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:03:11.381760 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:03:11.398709 systemd-networkd[756]: eth0: Link UP
Mar 7 01:03:11.381785 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:03:11.398716 systemd-networkd[756]: eth0: Gained carrier
Mar 7 01:03:11.381802 ignition[659]: failed to fetch config: resource requires networking
Mar 7 01:03:11.398728 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:03:11.383758 ignition[659]: Ignition finished successfully
Mar 7 01:03:11.416481 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:03:11.509743 ignition[764]: Ignition 2.19.0
Mar 7 01:03:11.418136 systemd-networkd[756]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521'
Mar 7 01:03:11.509752 ignition[764]: Stage: fetch
Mar 7 01:03:11.418152 systemd-networkd[756]: eth0: DHCPv4 address 10.128.0.69/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 7 01:03:11.509966 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:03:11.423936 systemd[1]: Reached target network.target - Network.
Mar 7 01:03:11.509978 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:03:11.462260 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:03:11.510144 ignition[764]: parsed url from cmdline: ""
Mar 7 01:03:11.518822 unknown[764]: fetched base config from "system"
Mar 7 01:03:11.510151 ignition[764]: no config URL provided
Mar 7 01:03:11.518834 unknown[764]: fetched base config from "system"
Mar 7 01:03:11.510161 ignition[764]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:03:11.518846 unknown[764]: fetched user config from "gcp"
Mar 7 01:03:11.510173 ignition[764]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:03:11.521286 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:03:11.510196 ignition[764]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Mar 7 01:03:11.545233 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:03:11.513944 ignition[764]: GET result: OK
Mar 7 01:03:11.569684 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:03:11.514045 ignition[764]: parsing config with SHA512: 3be4b95e9eb1872ab305adce493673ab47f3fb4246d98d1f020250ae2dbad402dd215846a92a5f9f7519a40b5e96f483cd94a8d56b76d949d25bfe10ee3dfb01
Mar 7 01:03:11.594250 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:03:11.519353 ignition[764]: fetch: fetch complete
Mar 7 01:03:11.632232 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:03:11.519359 ignition[764]: fetch: fetch passed
Mar 7 01:03:11.643396 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:03:11.519411 ignition[764]: Ignition finished successfully
Mar 7 01:03:11.660202 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:03:11.567213 ignition[771]: Ignition 2.19.0
Mar 7 01:03:11.694256 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:03:11.567223 ignition[771]: Stage: kargs
Mar 7 01:03:11.708214 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:03:11.567429 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:03:11.729204 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:03:11.567444 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:03:11.750260 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:03:11.568539 ignition[771]: kargs: kargs passed
Mar 7 01:03:11.568599 ignition[771]: Ignition finished successfully
Mar 7 01:03:11.629895 ignition[776]: Ignition 2.19.0
Mar 7 01:03:11.629905 ignition[776]: Stage: disks
Mar 7 01:03:11.630166 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:03:11.630180 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:03:11.631136 ignition[776]: disks: disks passed
Mar 7 01:03:11.631201 ignition[776]: Ignition finished successfully
Mar 7 01:03:11.803785 systemd-fsck[785]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 7 01:03:11.998187 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:03:12.028206 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:03:12.149070 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:03:12.149544 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:03:12.150470 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:03:12.183148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:03:12.208186 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:03:12.238235 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (793)
Mar 7 01:03:12.238278 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:03:12.238305 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:03:12.209005 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:03:12.288223 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:03:12.288274 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:03:12.288300 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:03:12.209122 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:03:12.209165 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:03:12.272069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:03:12.315746 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:03:12.338263 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:03:12.467988 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:03:12.478189 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:03:12.489364 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:03:12.499159 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:03:12.632194 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:03:12.637183 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:03:12.675052 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:03:12.683312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:03:12.693280 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:03:12.718111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:03:12.738192 ignition[905]: INFO : Ignition 2.19.0
Mar 7 01:03:12.738192 ignition[905]: INFO : Stage: mount
Mar 7 01:03:12.738192 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:03:12.738192 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:03:12.738192 ignition[905]: INFO : mount: mount passed
Mar 7 01:03:12.738192 ignition[905]: INFO : Ignition finished successfully
Mar 7 01:03:12.739007 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:03:12.751141 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:03:13.155261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:03:13.201051 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (917)
Mar 7 01:03:13.218987 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:03:13.219099 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:03:13.219127 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:03:13.242404 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:03:13.242490 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:03:13.245979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:03:13.280167 systemd-networkd[756]: eth0: Gained IPv6LL
Mar 7 01:03:13.288924 ignition[934]: INFO : Ignition 2.19.0
Mar 7 01:03:13.288924 ignition[934]: INFO : Stage: files
Mar 7 01:03:13.303222 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:03:13.303222 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:03:13.303222 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:03:13.303222 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:03:13.303222 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:03:13.303222 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:03:13.303222 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:03:13.303222 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:03:13.303222 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:03:13.303222 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:03:13.299202 unknown[934]: wrote ssh authorized keys file for user: core
Mar 7 01:03:13.441207 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:03:13.525122 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:03:13.542167 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 7 01:03:13.988336 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:03:15.062779 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:03:15.062779 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:03:15.102298 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:03:15.102298 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:03:15.102298 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:03:15.102298 ignition[934]: INFO : files: files passed
Mar 7 01:03:15.102298 ignition[934]: INFO : Ignition finished successfully
Mar 7 01:03:15.068988 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:03:15.087286 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:03:15.119046 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:03:15.141736 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:03:15.336382 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:03:15.336382 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:03:15.141895 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:03:15.403193 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:03:15.166539 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:03:15.181327 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:03:15.209311 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:03:15.297470 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:03:15.297596 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:03:15.315983 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:03:15.336215 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:03:15.353362 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:03:15.360226 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:03:15.413797 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:03:15.434586 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:03:15.472179 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:03:15.483349 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:03:15.502430 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:03:15.521406 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:03:15.521617 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:03:15.553414 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:03:15.576312 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:03:15.595344 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:03:15.616420 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:03:15.638421 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:03:15.659346 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:03:15.679332 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:03:15.698398 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:03:15.716401 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:03:15.738358 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:03:15.756319 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:03:15.756490 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:03:15.784505 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:03:15.804388 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:03:15.825327 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:03:15.961456 ignition[986]: INFO : Ignition 2.19.0
Mar 7 01:03:15.961456 ignition[986]: INFO : Stage: umount
Mar 7 01:03:15.961456 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:03:15.961456 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:03:15.961456 ignition[986]: INFO : umount: umount passed
Mar 7 01:03:15.961456 ignition[986]: INFO : Ignition finished successfully
Mar 7 01:03:15.825499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:03:15.846433 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:03:15.846643 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:03:15.877485 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:03:15.877716 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:03:15.898416 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:03:15.898574 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:03:15.922298 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:03:15.937352 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:03:15.977208 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:03:15.977591 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:03:15.989549 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:03:15.989757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:03:16.025657 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:03:16.026760 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:03:16.026878 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:03:16.043880 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:03:16.044001 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:03:16.063196 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:03:16.063321 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:03:16.069906 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:03:16.069975 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:03:16.099326 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:03:16.099418 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:03:16.117300 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:03:16.117387 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:03:16.137319 systemd[1]: Stopped target network.target - Network.
Mar 7 01:03:16.137390 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:03:16.137494 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:03:16.165262 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:03:16.184209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:03:16.186103 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:03:16.203212 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:03:16.218214 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:03:16.235252 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:03:16.235344 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:03:16.253279 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:03:16.253376 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:03:16.273261 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:03:16.273364 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:03:16.293269 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:03:16.293364 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:03:16.313245 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:03:16.313339 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:03:16.333543 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:03:16.339096 systemd-networkd[756]: eth0: DHCPv6 lease lost
Mar 7 01:03:16.351376 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:03:16.369819 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:03:16.369959 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:03:16.379883 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:03:16.380171 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:03:16.396606 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:03:16.937161 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:03:16.396669 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:03:16.418250 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:03:16.429324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:03:16.429407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:03:16.447408 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:03:16.447482 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:03:16.465411 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:03:16.465487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:03:16.490402 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:03:16.490484 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:03:16.518473 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:03:16.537789 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:03:16.537957 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:03:16.553349 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:03:16.553430 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:03:16.573384 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:03:16.573439 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:03:16.601308 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:03:16.601391 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:03:16.629407 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:03:16.629502 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:03:16.659407 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:03:16.659500 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:03:16.692200 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:03:16.730126 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:03:16.730235 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:03:16.748276 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:03:16.748373 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:03:16.769258 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:03:16.769353 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:03:16.790252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:03:16.790344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:03:16.811872 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:03:16.811999 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:03:16.821705 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:03:16.821822 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:03:16.850665 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:03:16.861230 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:03:16.899635 systemd[1]: Switching root.
Mar 7 01:03:17.328153 systemd-journald[184]: Journal stopped
Mar 7 01:03:19.602430 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:03:19.602479 kernel: SELinux: policy capability open_perms=1
Mar 7 01:03:19.602501 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:03:19.602519 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:03:19.602537 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:03:19.602556 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:03:19.602577 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:03:19.602600 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:03:19.602619 kernel: audit: type=1403 audit(1772845397.568:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:03:19.602640 systemd[1]: Successfully loaded SELinux policy in 91.185ms.
Mar 7 01:03:19.602663 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.910ms.
Mar 7 01:03:19.602685 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:03:19.602706 systemd[1]: Detected virtualization google.
Mar 7 01:03:19.602726 systemd[1]: Detected architecture x86-64.
Mar 7 01:03:19.602753 systemd[1]: Detected first boot.
Mar 7 01:03:19.602775 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:03:19.602797 zram_generator::config[1028]: No configuration found.
Mar 7 01:03:19.602819 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:03:19.602841 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:03:19.602867 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:03:19.602891 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:03:19.602913 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:03:19.602934 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:03:19.602956 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:03:19.602978 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:03:19.603000 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:03:19.603039 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:03:19.603061 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:03:19.603083 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:03:19.603104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:03:19.603126 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:03:19.603148 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:03:19.603169 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:03:19.603192 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:03:19.603217 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:03:19.603239 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:03:19.603261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:03:19.603282 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:03:19.603304 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:03:19.603326 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:03:19.603353 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:03:19.603377 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:03:19.603400 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:03:19.603433 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:03:19.603456 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:03:19.603478 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:03:19.603501 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:03:19.603523 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:03:19.603545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:03:19.603568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:03:19.603596 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:03:19.603619 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:03:19.603641 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:03:19.603664 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:03:19.603686 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:03:19.603713 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:03:19.603735 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:03:19.603763 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:03:19.603787 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:03:19.603810 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:03:19.603833 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:03:19.603856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:03:19.603880 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:03:19.603908 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:03:19.603930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:03:19.603953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:03:19.603976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:03:19.603999 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:03:19.604033 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:03:19.604057 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:03:19.604080 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:03:19.604108 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:03:19.604130 kernel: fuse: init (API version 7.39)
Mar 7 01:03:19.604151 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:03:19.604173 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:03:19.604196 kernel: loop: module loaded
Mar 7 01:03:19.604216 kernel: ACPI: bus type drm_connector registered
Mar 7 01:03:19.604238 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:03:19.604290 systemd-journald[1115]: Collecting audit messages is disabled.
Mar 7 01:03:19.604340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:03:19.604364 systemd-journald[1115]: Journal started
Mar 7 01:03:19.604406 systemd-journald[1115]: Runtime Journal (/run/log/journal/5cd79951239b40f292e7d488e588662f) is 8.0M, max 148.7M, 140.7M free.
Mar 7 01:03:18.426116 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:03:18.449417 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:03:18.450083 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:03:19.635254 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:03:19.669061 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:03:19.700043 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:03:19.718040 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:03:19.718126 systemd[1]: Stopped verity-setup.service.
Mar 7 01:03:19.748049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:03:19.758061 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:03:19.768589 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:03:19.778434 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:03:19.788419 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:03:19.798388 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:03:19.808376 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:03:19.818410 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:03:19.828498 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:03:19.840683 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:03:19.852561 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:03:19.852813 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:03:19.864544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:03:19.864791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:03:19.876606 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:03:19.876871 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:03:19.887523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:03:19.887766 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:03:19.899626 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:03:19.899878 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:03:19.910579 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:03:19.910825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:03:19.921777 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:03:19.931606 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:03:19.943578 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:03:19.955610 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:03:19.981185 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:03:19.997205 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:03:20.021146 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:03:20.031254 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:03:20.031538 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:03:20.044277 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:03:20.064328 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:03:20.080332 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:03:20.090469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:03:20.101699 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:03:20.120867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:03:20.133743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:03:20.141251 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:03:20.151251 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:03:20.152096 systemd-journald[1115]: Time spent on flushing to /var/log/journal/5cd79951239b40f292e7d488e588662f is 118.185ms for 929 entries.
Mar 7 01:03:20.152096 systemd-journald[1115]: System Journal (/var/log/journal/5cd79951239b40f292e7d488e588662f) is 8.0M, max 584.8M, 576.8M free.
Mar 7 01:03:20.308435 systemd-journald[1115]: Received client request to flush runtime journal.
Mar 7 01:03:20.308577 kernel: loop0: detected capacity change from 0 to 219192
Mar 7 01:03:20.166362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:03:20.187270 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:03:20.209336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:03:20.228645 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:03:20.245860 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:03:20.257443 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:03:20.269583 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:03:20.281672 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:03:20.301751 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:03:20.324355 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:03:20.337626 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 01:03:20.349741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:03:20.358400 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Mar 7 01:03:20.358443 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Mar 7 01:03:20.384301 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:03:20.389478 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:03:20.410444 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 01:03:20.423132 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 7 01:03:20.437191 kernel: loop1: detected capacity change from 0 to 54824 Mar 7 01:03:20.435334 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 01:03:20.437366 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 01:03:20.500503 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 01:03:20.520175 kernel: loop2: detected capacity change from 0 to 140768 Mar 7 01:03:20.524421 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:03:20.619194 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Mar 7 01:03:20.619236 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Mar 7 01:03:20.633237 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 7 01:03:20.645046 kernel: loop3: detected capacity change from 0 to 142488 Mar 7 01:03:20.750076 kernel: loop4: detected capacity change from 0 to 219192 Mar 7 01:03:20.799037 kernel: loop5: detected capacity change from 0 to 54824 Mar 7 01:03:20.838048 kernel: loop6: detected capacity change from 0 to 140768 Mar 7 01:03:20.909037 kernel: loop7: detected capacity change from 0 to 142488 Mar 7 01:03:20.962627 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Mar 7 01:03:20.965939 (sd-merge)[1173]: Merged extensions into '/usr'. Mar 7 01:03:20.975491 systemd[1]: Reloading requested from client PID 1146 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:03:20.975667 systemd[1]: Reloading... Mar 7 01:03:21.079810 zram_generator::config[1195]: No configuration found. Mar 7 01:03:21.375874 ldconfig[1141]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 01:03:21.440098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:03:21.537062 systemd[1]: Reloading finished in 560 ms. Mar 7 01:03:21.568990 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 01:03:21.579811 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:03:21.591670 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:03:21.617284 systemd[1]: Starting ensure-sysext.service... Mar 7 01:03:21.631284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:03:21.651371 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:03:21.657725 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... 
Mar 7 01:03:21.657749 systemd[1]: Reloading...
Mar 7 01:03:21.661061 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:03:21.662343 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:03:21.666226 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:03:21.666856 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Mar 7 01:03:21.666983 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Mar 7 01:03:21.674963 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:03:21.675160 systemd-tmpfiles[1241]: Skipping /boot
Mar 7 01:03:21.699480 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:03:21.699698 systemd-tmpfiles[1241]: Skipping /boot
Mar 7 01:03:21.754350 systemd-udevd[1243]: Using default interface naming scheme 'v255'.
Mar 7 01:03:21.802088 zram_generator::config[1268]: No configuration found.
Mar 7 01:03:22.127930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:03:22.155106 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 7 01:03:22.226897 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 01:03:22.226944 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 7 01:03:22.226979 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:03:22.241038 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1288)
Mar 7 01:03:22.270041 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Mar 7 01:03:22.290831 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:03:22.291398 systemd[1]: Reloading finished in 633 ms.
Mar 7 01:03:22.313750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:03:22.320029 kernel: ACPI: button: Sleep Button [SLPF]
Mar 7 01:03:22.335770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:03:22.341149 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:03:22.398314 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:03:22.406498 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:03:22.426263 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:03:22.435652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:03:22.441429 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:03:22.462415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:03:22.473362 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:03:22.486434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:03:22.496384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:03:22.504816 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:03:22.525143 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:03:22.543226 augenrules[1363]: No rules
Mar 7 01:03:22.544066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:03:22.561439 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:03:22.573167 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:03:22.581790 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:03:22.592806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:03:22.593189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:03:22.605058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:03:22.605318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:03:22.617064 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:03:22.617376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:03:22.628853 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:03:22.640795 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:03:22.670620 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:03:22.696237 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:03:22.707577 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:03:22.720976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 7 01:03:22.733584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:03:22.733892 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:03:22.739272 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:03:22.758728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:03:22.775383 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:03:22.779610 lvm[1379]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:03:22.791281 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:03:22.809413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:03:22.832306 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 7 01:03:22.841379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:03:22.850678 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:03:22.862244 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:03:22.869257 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:03:22.892446 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:03:22.910367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:03:22.920205 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:03:22.920282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:03:22.923089 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:03:22.934810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:03:22.935103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:03:22.944663 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:03:22.944938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:03:22.955660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:03:22.956474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:03:22.957090 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:03:22.957350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:03:22.962668 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:03:22.963166 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:03:22.968499 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:03:22.983092 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:03:22.990385 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:03:22.990488 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:03:22.990583 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:03:22.993310 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 7 01:03:23.000193 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Mar 7 01:03:23.013926 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:03:23.105162 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:03:23.131179 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Mar 7 01:03:23.139396 systemd-networkd[1361]: lo: Link UP
Mar 7 01:03:23.139416 systemd-networkd[1361]: lo: Gained carrier
Mar 7 01:03:23.141997 systemd-networkd[1361]: Enumeration completed
Mar 7 01:03:23.142448 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:03:23.142794 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:03:23.142802 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:03:23.143534 systemd-networkd[1361]: eth0: Link UP
Mar 7 01:03:23.143541 systemd-networkd[1361]: eth0: Gained carrier
Mar 7 01:03:23.143566 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:03:23.152673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:03:23.154141 systemd-networkd[1361]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521'
Mar 7 01:03:23.154166 systemd-networkd[1361]: eth0: DHCPv4 address 10.128.0.69/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 7 01:03:23.169407 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:03:23.175804 systemd-resolved[1364]: Positive Trust Anchors:
Mar 7 01:03:23.175820 systemd-resolved[1364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:03:23.175862 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:03:23.183115 systemd-resolved[1364]: Defaulting to hostname 'linux'.
Mar 7 01:03:23.191739 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:03:23.201359 systemd[1]: Reached target network.target - Network.
Mar 7 01:03:23.210209 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:03:23.221183 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:03:23.231360 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:03:23.242264 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:03:23.253448 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:03:23.263350 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:03:23.274206 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:03:23.285162 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:03:23.285229 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:03:23.294163 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:03:23.302775 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:03:23.313914 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:03:23.327719 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:03:23.338054 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:03:23.348332 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:03:23.358171 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:03:23.366259 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:03:23.366321 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:03:23.372176 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:03:23.395253 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:03:23.416449 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:03:23.435295 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:03:23.460268 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:03:23.467602 jq[1431]: false
Mar 7 01:03:23.470173 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:03:23.479281 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:03:23.499261 systemd[1]: Started ntpd.service - Network Time Service.
Mar 7 01:03:23.516221 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:03:23.522143 coreos-metadata[1429]: Mar 07 01:03:23.521 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Mar 7 01:03:23.523268 coreos-metadata[1429]: Mar 07 01:03:23.522 INFO Fetch successful
Mar 7 01:03:23.523268 coreos-metadata[1429]: Mar 07 01:03:23.522 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Mar 7 01:03:23.527975 coreos-metadata[1429]: Mar 07 01:03:23.525 INFO Fetch successful
Mar 7 01:03:23.527975 coreos-metadata[1429]: Mar 07 01:03:23.525 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Mar 7 01:03:23.527975 coreos-metadata[1429]: Mar 07 01:03:23.525 INFO Fetch successful
Mar 7 01:03:23.527975 coreos-metadata[1429]: Mar 07 01:03:23.525 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Mar 7 01:03:23.533247 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:03:23.539282 extend-filesystems[1432]: Found loop4
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found loop5
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found loop6
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found loop7
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found sda
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found sda1
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found sda2
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found sda3
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found usr
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found sda4
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found sda6
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found sda7
Mar 7 01:03:23.553189 extend-filesystems[1432]: Found sda9
Mar 7 01:03:23.553189 extend-filesystems[1432]: Checking size of /dev/sda9
Mar 7 01:03:23.747262 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks
Mar 7 01:03:23.747319 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1278)
Mar 7 01:03:23.747357 kernel: EXT4-fs (sda9): resized filesystem to 3587067
Mar 7 01:03:23.747427 coreos-metadata[1429]: Mar 07 01:03:23.540 INFO Fetch successful
Mar 7 01:03:23.551261 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: ----------------------------------------------------
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: corporation. Support and training for ntp-4 are
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: available at https://www.nwtime.org/support
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: ----------------------------------------------------
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: proto: precision = 0.075 usec (-24)
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: basedate set to 2026-02-22
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: Listen normally on 3 eth0 10.128.0.69:123
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: Listen normally on 4 lo [::1]:123
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: bind(21) AF_INET6 fe80::4001:aff:fe80:45%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:45%2#123
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:45%2
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: Listening on routing socket on fd #21 for interface updates
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:03:23.747900 ntpd[1436]: 7 Mar 01:03:23 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:03:23.757409 extend-filesystems[1432]: Resized partition /dev/sda9
Mar 7 01:03:23.594372 dbus-daemon[1430]: [system] SELinux support is enabled
Mar 7 01:03:23.594723 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:03:23.789942 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:03:23.789942 extend-filesystems[1452]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 7 01:03:23.789942 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 7 01:03:23.789942 extend-filesystems[1452]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long.
Mar 7 01:03:23.612643 dbus-daemon[1430]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1361 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 7 01:03:23.624874 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Mar 7 01:03:23.832693 extend-filesystems[1432]: Resized filesystem in /dev/sda9
Mar 7 01:03:23.617152 ntpd[1436]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:03:23.625689 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:03:23.617204 ntpd[1436]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:03:23.840902 jq[1461]: true
Mar 7 01:03:23.634716 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:03:23.617221 ntpd[1436]: ----------------------------------------------------
Mar 7 01:03:23.652255 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:03:23.617235 ntpd[1436]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:03:23.861399 update_engine[1459]: I20260307 01:03:23.783954 1459 main.cc:92] Flatcar Update Engine starting
Mar 7 01:03:23.861399 update_engine[1459]: I20260307 01:03:23.802281 1459 update_check_scheduler.cc:74] Next update check in 8m44s
Mar 7 01:03:23.670942 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:03:23.617249 ntpd[1436]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:03:23.687658 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:03:23.617263 ntpd[1436]: corporation. Support and training for ntp-4 are
Mar 7 01:03:23.688458 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:03:23.617277 ntpd[1436]: available at https://www.nwtime.org/support
Mar 7 01:03:23.688989 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:03:23.617292 ntpd[1436]: ----------------------------------------------------
Mar 7 01:03:23.690584 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:03:23.622212 ntpd[1436]: proto: precision = 0.075 usec (-24)
Mar 7 01:03:23.725952 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:03:23.623411 ntpd[1436]: basedate set to 2026-02-22
Mar 7 01:03:23.726286 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:03:23.623436 ntpd[1436]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:03:23.765705 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:03:23.631660 ntpd[1436]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:03:23.768092 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:03:23.631722 ntpd[1436]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:03:23.633950 ntpd[1436]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:03:23.634047 ntpd[1436]: Listen normally on 3 eth0 10.128.0.69:123
Mar 7 01:03:23.634154 ntpd[1436]: Listen normally on 4 lo [::1]:123
Mar 7 01:03:23.634230 ntpd[1436]: bind(21) AF_INET6 fe80::4001:aff:fe80:45%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:03:23.634261 ntpd[1436]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:45%2#123
Mar 7 01:03:23.634284 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:45%2
Mar 7 01:03:23.634329 ntpd[1436]: Listening on routing socket on fd #21 for interface updates
Mar 7 01:03:23.641199 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:03:23.641240 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:03:23.812584 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:03:23.871427 jq[1467]: true
Mar 7 01:03:23.875254 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 01:03:23.894618 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:03:23.903194 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:03:23.916461 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:03:23.916603 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:03:23.916638 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:03:23.942905 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 7 01:03:23.953213 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:03:23.953268 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:03:23.956030 tar[1464]: linux-amd64/LICENSE
Mar 7 01:03:23.958317 tar[1464]: linux-amd64/helm
Mar 7 01:03:23.973732 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:03:24.004583 systemd-logind[1454]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 7 01:03:24.004629 systemd-logind[1454]: Watching system buttons on /dev/input/event3 (Sleep Button)
Mar 7 01:03:24.004660 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:03:24.015813 systemd-logind[1454]: New seat seat0.
Mar 7 01:03:24.019674 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:03:24.072053 bash[1499]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:03:24.073862 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:03:24.096428 systemd[1]: Starting sshkeys.service...
Mar 7 01:03:24.169045 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 7 01:03:24.172239 dbus-daemon[1430]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1485 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 7 01:03:24.180230 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 7 01:03:24.199386 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 01:03:24.222506 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 7 01:03:24.240720 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:03:24.257127 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:03:24.391480 polkitd[1503]: Started polkitd version 121
Mar 7 01:03:24.434302 polkitd[1503]: Loading rules from directory /etc/polkit-1/rules.d
Mar 7 01:03:24.436158 polkitd[1503]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 7 01:03:24.446518 coreos-metadata[1504]: Mar 07 01:03:24.446 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Mar 7 01:03:24.447178 polkitd[1503]: Finished loading, compiling and executing 2 rules
Mar 7 01:03:24.448619 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 7 01:03:24.448877 systemd[1]: Started polkit.service - Authorization Manager.
Mar 7 01:03:24.450790 polkitd[1503]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 7 01:03:24.455103 coreos-metadata[1504]: Mar 07 01:03:24.454 INFO Fetch failed with 404: resource not found
Mar 7 01:03:24.455103 coreos-metadata[1504]: Mar 07 01:03:24.454 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Mar 7 01:03:24.455688 coreos-metadata[1504]: Mar 07 01:03:24.455 INFO Fetch successful
Mar 7 01:03:24.455688 coreos-metadata[1504]: Mar 07 01:03:24.455 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Mar 7 01:03:24.462031 coreos-metadata[1504]: Mar 07 01:03:24.458 INFO Fetch failed with 404: resource not found
Mar 7 01:03:24.462031 coreos-metadata[1504]: Mar 07 01:03:24.458 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Mar 7 01:03:24.464231 coreos-metadata[1504]: Mar 07 01:03:24.464 INFO Fetch failed with 404: resource not found
Mar 7 01:03:24.464231 coreos-metadata[1504]: Mar 07 01:03:24.464 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Mar 7 01:03:24.465208 coreos-metadata[1504]: Mar 07 01:03:24.464 INFO Fetch successful
Mar 7 01:03:24.474955 unknown[1504]: wrote ssh authorized keys file for user: core
Mar 7 01:03:24.539909 systemd-hostnamed[1485]: Hostname set to (transient)
Mar 7 01:03:24.540874 systemd-resolved[1364]: System hostname changed to 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521'.
Mar 7 01:03:24.542003 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:03:24.566086 update-ssh-keys[1524]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:03:24.569136 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 7 01:03:24.585094 systemd[1]: Finished sshkeys.service.
Mar 7 01:03:24.617768 ntpd[1436]: bind(24) AF_INET6 fe80::4001:aff:fe80:45%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:03:24.618342 ntpd[1436]: 7 Mar 01:03:24 ntpd[1436]: bind(24) AF_INET6 fe80::4001:aff:fe80:45%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:03:24.618440 ntpd[1436]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:45%2#123
Mar 7 01:03:24.618570 ntpd[1436]: 7 Mar 01:03:24 ntpd[1436]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:45%2#123
Mar 7 01:03:24.618636 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:45%2
Mar 7 01:03:24.619451 ntpd[1436]: 7 Mar 01:03:24 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:45%2
Mar 7 01:03:24.660099 containerd[1476]: time="2026-03-07T01:03:24.659911751Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:03:24.773816 containerd[1476]: time="2026-03-07T01:03:24.773737442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:03:24.784302 containerd[1476]: time="2026-03-07T01:03:24.784094528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:03:24.784302 containerd[1476]: time="2026-03-07T01:03:24.784155915Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:03:24.784302 containerd[1476]: time="2026-03-07T01:03:24.784185180Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:03:24.784519 containerd[1476]: time="2026-03-07T01:03:24.784416738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:03:24.784519 containerd[1476]: time="2026-03-07T01:03:24.784446231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:03:24.784633 containerd[1476]: time="2026-03-07T01:03:24.784545180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:03:24.784633 containerd[1476]: time="2026-03-07T01:03:24.784566426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:03:24.785363 containerd[1476]: time="2026-03-07T01:03:24.784869541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..."
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:03:24.785363 containerd[1476]: time="2026-03-07T01:03:24.784906754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:03:24.785363 containerd[1476]: time="2026-03-07T01:03:24.784933921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:03:24.785363 containerd[1476]: time="2026-03-07T01:03:24.784951832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:03:24.785363 containerd[1476]: time="2026-03-07T01:03:24.785106311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:03:24.786056 containerd[1476]: time="2026-03-07T01:03:24.785441752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:03:24.786056 containerd[1476]: time="2026-03-07T01:03:24.785657009Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:03:24.786056 containerd[1476]: time="2026-03-07T01:03:24.785684479Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:03:24.786056 containerd[1476]: time="2026-03-07T01:03:24.785835773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 7 01:03:24.786056 containerd[1476]: time="2026-03-07T01:03:24.785914010Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:03:24.795974 containerd[1476]: time="2026-03-07T01:03:24.795182908Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:03:24.795974 containerd[1476]: time="2026-03-07T01:03:24.795246809Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 01:03:24.795974 containerd[1476]: time="2026-03-07T01:03:24.795272992Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:03:24.795974 containerd[1476]: time="2026-03-07T01:03:24.795297856Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:03:24.795974 containerd[1476]: time="2026-03-07T01:03:24.795322606Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:03:24.795974 containerd[1476]: time="2026-03-07T01:03:24.795513287Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:03:24.795974 containerd[1476]: time="2026-03-07T01:03:24.795919066Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796112505Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796141981Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796165061Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796199589Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796223215Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796246279Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796271479Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796296281Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796318866Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796340038Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:03:24.796375 containerd[1476]: time="2026-03-07T01:03:24.796362532Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796393638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796419400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796440605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796483022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796506254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796528687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796550440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796583138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796608576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796634708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796656521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796686590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796715058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796742725Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:03:24.796832 containerd[1476]: time="2026-03-07T01:03:24.796776382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.797548 containerd[1476]: time="2026-03-07T01:03:24.796811362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.797548 containerd[1476]: time="2026-03-07T01:03:24.796830952Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:03:24.797548 containerd[1476]: time="2026-03-07T01:03:24.796890303Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:03:24.797548 containerd[1476]: time="2026-03-07T01:03:24.796921448Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:03:24.797548 containerd[1476]: time="2026-03-07T01:03:24.796940605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:03:24.797548 containerd[1476]: time="2026-03-07T01:03:24.796962609Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:03:24.797548 containerd[1476]: time="2026-03-07T01:03:24.796980311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.802814 containerd[1476]: time="2026-03-07T01:03:24.797000823Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 7 01:03:24.802814 containerd[1476]: time="2026-03-07T01:03:24.800063263Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:03:24.802814 containerd[1476]: time="2026-03-07T01:03:24.800091167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.800572191Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.800722007Z" level=info msg="Connect containerd service" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.800779225Z" level=info msg="using legacy CRI server" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.800801805Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.800951075Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.801887015Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.802269428Z" level=info msg="Start subscribing containerd event" Mar 7 
01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.802347599Z" level=info msg="Start recovering state" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.802433286Z" level=info msg="Start event monitor" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.802453753Z" level=info msg="Start snapshots syncer" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.802467496Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:03:24.803481 containerd[1476]: time="2026-03-07T01:03:24.802480492Z" level=info msg="Start streaming server" Mar 7 01:03:24.806309 containerd[1476]: time="2026-03-07T01:03:24.806277443Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:03:24.807100 containerd[1476]: time="2026-03-07T01:03:24.806367482Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:03:24.807100 containerd[1476]: time="2026-03-07T01:03:24.806469312Z" level=info msg="containerd successfully booted in 0.150141s" Mar 7 01:03:24.808173 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:03:25.056232 systemd-networkd[1361]: eth0: Gained IPv6LL Mar 7 01:03:25.063660 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:03:25.075860 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:03:25.095335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:03:25.113059 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:03:25.132174 systemd[1]: Starting oem-gce.service - GCE Linux Agent... 
Mar 7 01:03:25.171047 init.sh[1534]: + '[' -e /etc/default/instance_configs.cfg.template ']' Mar 7 01:03:25.171047 init.sh[1534]: + echo -e '[InstanceSetup]\nset_host_keys = false' Mar 7 01:03:25.173044 init.sh[1534]: + /usr/bin/google_instance_setup Mar 7 01:03:25.192591 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:03:25.325509 tar[1464]: linux-amd64/README.md Mar 7 01:03:25.351822 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:03:25.370192 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:03:25.420190 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:03:25.439533 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:03:25.459405 systemd[1]: Started sshd@0-10.128.0.69:22-68.220.241.50:55964.service - OpenSSH per-connection server daemon (68.220.241.50:55964). Mar 7 01:03:25.471555 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:03:25.471865 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:03:25.499627 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:03:25.541628 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:03:25.565521 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:03:25.584507 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:03:25.594465 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:03:25.796103 sshd[1556]: Accepted publickey for core from 68.220.241.50 port 55964 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:03:25.803580 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:03:25.826809 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:03:25.843529 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 7 01:03:25.859932 systemd-logind[1454]: New session 1 of user core. Mar 7 01:03:25.896915 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:03:25.918488 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:03:25.930205 instance-setup[1540]: INFO Running google_set_multiqueue. Mar 7 01:03:25.955937 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:03:25.960416 instance-setup[1540]: INFO Set channels for eth0 to 2. Mar 7 01:03:25.966868 instance-setup[1540]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Mar 7 01:03:25.969320 instance-setup[1540]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Mar 7 01:03:25.969402 instance-setup[1540]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Mar 7 01:03:25.971963 instance-setup[1540]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Mar 7 01:03:25.972073 instance-setup[1540]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Mar 7 01:03:25.974060 instance-setup[1540]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Mar 7 01:03:25.976144 instance-setup[1540]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Mar 7 01:03:25.976645 instance-setup[1540]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Mar 7 01:03:25.987357 instance-setup[1540]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 7 01:03:25.991972 instance-setup[1540]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 7 01:03:25.996158 instance-setup[1540]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Mar 7 01:03:25.996215 instance-setup[1540]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Mar 7 01:03:26.026688 init.sh[1534]: + /usr/bin/google_metadata_script_runner --script-type startup Mar 7 01:03:26.190779 systemd[1572]: Queued start job for default target default.target. Mar 7 01:03:26.204901 systemd[1572]: Created slice app.slice - User Application Slice. Mar 7 01:03:26.204962 systemd[1572]: Reached target paths.target - Paths. Mar 7 01:03:26.204986 systemd[1572]: Reached target timers.target - Timers. Mar 7 01:03:26.215164 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:03:26.233656 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:03:26.233921 systemd[1572]: Reached target sockets.target - Sockets. Mar 7 01:03:26.233952 systemd[1572]: Reached target basic.target - Basic System. Mar 7 01:03:26.234196 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:03:26.234505 systemd[1572]: Reached target default.target - Main User Target. Mar 7 01:03:26.234579 systemd[1572]: Startup finished in 264ms. Mar 7 01:03:26.254261 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:03:26.276705 startup-script[1603]: INFO Starting startup scripts. Mar 7 01:03:26.283282 startup-script[1603]: INFO No startup scripts found in metadata. Mar 7 01:03:26.283372 startup-script[1603]: INFO Finished running startup scripts. 
Mar 7 01:03:26.305690 init.sh[1534]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Mar 7 01:03:26.305690 init.sh[1534]: + daemon_pids=() Mar 7 01:03:26.308526 init.sh[1534]: + for d in accounts clock_skew network Mar 7 01:03:26.308526 init.sh[1534]: + daemon_pids+=($!) Mar 7 01:03:26.308526 init.sh[1534]: + for d in accounts clock_skew network Mar 7 01:03:26.308526 init.sh[1534]: + daemon_pids+=($!) Mar 7 01:03:26.308526 init.sh[1534]: + for d in accounts clock_skew network Mar 7 01:03:26.308526 init.sh[1534]: + daemon_pids+=($!) Mar 7 01:03:26.308526 init.sh[1534]: + NOTIFY_SOCKET=/run/systemd/notify Mar 7 01:03:26.308526 init.sh[1534]: + /usr/bin/systemd-notify --ready Mar 7 01:03:26.308946 init.sh[1610]: + /usr/bin/google_clock_skew_daemon Mar 7 01:03:26.309555 init.sh[1611]: + /usr/bin/google_network_daemon Mar 7 01:03:26.309809 init.sh[1609]: + /usr/bin/google_accounts_daemon Mar 7 01:03:26.321175 systemd[1]: Started oem-gce.service - GCE Linux Agent. Mar 7 01:03:26.333485 init.sh[1534]: + wait -n 1609 1610 1611 Mar 7 01:03:26.490452 systemd[1]: Started sshd@1-10.128.0.69:22-68.220.241.50:55972.service - OpenSSH per-connection server daemon (68.220.241.50:55972). Mar 7 01:03:26.783881 google-clock-skew[1610]: INFO Starting Google Clock Skew daemon. Mar 7 01:03:26.790768 google-clock-skew[1610]: INFO Clock drift token has changed: 0. Mar 7 01:03:26.801994 google-networking[1611]: INFO Starting Google Networking daemon. Mar 7 01:03:26.809304 sshd[1615]: Accepted publickey for core from 68.220.241.50 port 55972 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:03:26.814815 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:03:26.825712 systemd-logind[1454]: New session 2 of user core. Mar 7 01:03:26.833262 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 7 01:03:26.914199 groupadd[1626]: group added to /etc/group: name=google-sudoers, GID=1000 Mar 7 01:03:26.920359 groupadd[1626]: group added to /etc/gshadow: name=google-sudoers Mar 7 01:03:26.988446 groupadd[1626]: new group: name=google-sudoers, GID=1000 Mar 7 01:03:27.007236 sshd[1615]: pam_unix(sshd:session): session closed for user core Mar 7 01:03:27.014261 systemd[1]: sshd@1-10.128.0.69:22-68.220.241.50:55972.service: Deactivated successfully. Mar 7 01:03:27.016997 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:03:27.019251 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:03:27.022051 systemd-logind[1454]: Removed session 2. Mar 7 01:03:27.028406 google-accounts[1609]: INFO Starting Google Accounts daemon. Mar 7 01:03:27.044663 google-accounts[1609]: WARNING OS Login not installed. Mar 7 01:03:27.048827 google-accounts[1609]: INFO Creating a new user account for 0. Mar 7 01:03:27.053507 systemd[1]: Started sshd@2-10.128.0.69:22-68.220.241.50:55974.service - OpenSSH per-connection server daemon (68.220.241.50:55974). Mar 7 01:03:27.058566 init.sh[1639]: useradd: invalid user name '0': use --badname to ignore Mar 7 01:03:27.058383 google-accounts[1609]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Mar 7 01:03:27.273072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:03:27.289306 sshd[1638]: Accepted publickey for core from 68.220.241.50 port 55974 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:03:27.290613 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:03:27.290795 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 7 01:03:27.292501 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:03:27.300730 systemd[1]: Startup finished in 1.068s (kernel) + 9.790s (initrd) + 9.811s (userspace) = 20.670s. Mar 7 01:03:27.318401 systemd-logind[1454]: New session 3 of user core. Mar 7 01:03:27.325329 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:03:27.482535 sshd[1638]: pam_unix(sshd:session): session closed for user core Mar 7 01:03:27.489435 systemd[1]: sshd@2-10.128.0.69:22-68.220.241.50:55974.service: Deactivated successfully. Mar 7 01:03:27.491988 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:03:27.494502 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:03:27.495938 systemd-logind[1454]: Removed session 3. Mar 7 01:03:27.617827 ntpd[1436]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:45%2]:123 Mar 7 01:03:27.618902 ntpd[1436]: 7 Mar 01:03:27 ntpd[1436]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:45%2]:123 Mar 7 01:03:28.000533 google-clock-skew[1610]: INFO Synced system time with hardware clock. Mar 7 01:03:28.000941 systemd-resolved[1364]: Clock change detected. Flushing caches. Mar 7 01:03:28.396290 kubelet[1647]: E0307 01:03:28.396122 1647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:03:28.399354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:03:28.399634 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:03:28.400265 systemd[1]: kubelet.service: Consumed 1.189s CPU time. 
Mar 7 01:03:37.874724 systemd[1]: Started sshd@3-10.128.0.69:22-68.220.241.50:53716.service - OpenSSH per-connection server daemon (68.220.241.50:53716). Mar 7 01:03:38.088276 sshd[1663]: Accepted publickey for core from 68.220.241.50 port 53716 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:03:38.089169 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:03:38.094673 systemd-logind[1454]: New session 4 of user core. Mar 7 01:03:38.106616 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:03:38.256115 sshd[1663]: pam_unix(sshd:session): session closed for user core Mar 7 01:03:38.261493 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:03:38.262327 systemd[1]: sshd@3-10.128.0.69:22-68.220.241.50:53716.service: Deactivated successfully. Mar 7 01:03:38.265103 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:03:38.266625 systemd-logind[1454]: Removed session 4. Mar 7 01:03:38.301739 systemd[1]: Started sshd@4-10.128.0.69:22-68.220.241.50:53720.service - OpenSSH per-connection server daemon (68.220.241.50:53720). Mar 7 01:03:38.479313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:03:38.487676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:03:38.526924 sshd[1670]: Accepted publickey for core from 68.220.241.50 port 53720 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:03:38.528058 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:03:38.539496 systemd-logind[1454]: New session 5 of user core. Mar 7 01:03:38.550635 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:03:38.692019 sshd[1670]: pam_unix(sshd:session): session closed for user core Mar 7 01:03:38.699981 systemd[1]: sshd@4-10.128.0.69:22-68.220.241.50:53720.service: Deactivated successfully. 
Mar 7 01:03:38.702290 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 01:03:38.703581 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit.
Mar 7 01:03:38.705756 systemd-logind[1454]: Removed session 5.
Mar 7 01:03:38.741749 systemd[1]: Started sshd@5-10.128.0.69:22-68.220.241.50:53736.service - OpenSSH per-connection server daemon (68.220.241.50:53736).
Mar 7 01:03:38.833114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:38.843880 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:03:38.893915 kubelet[1687]: E0307 01:03:38.893848 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:03:38.898400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:03:38.898678 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:03:39.003359 sshd[1680]: Accepted publickey for core from 68.220.241.50 port 53736 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:03:39.004147 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:03:39.010594 systemd-logind[1454]: New session 6 of user core.
Mar 7 01:03:39.020580 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 01:03:39.191740 sshd[1680]: pam_unix(sshd:session): session closed for user core
Mar 7 01:03:39.197651 systemd[1]: sshd@5-10.128.0.69:22-68.220.241.50:53736.service: Deactivated successfully.
Mar 7 01:03:39.200225 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 01:03:39.201367 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit.
Mar 7 01:03:39.203185 systemd-logind[1454]: Removed session 6.
Mar 7 01:03:39.232746 systemd[1]: Started sshd@6-10.128.0.69:22-68.220.241.50:53750.service - OpenSSH per-connection server daemon (68.220.241.50:53750).
Mar 7 01:03:39.452484 sshd[1699]: Accepted publickey for core from 68.220.241.50 port 53750 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:03:39.453826 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:03:39.460191 systemd-logind[1454]: New session 7 of user core.
Mar 7 01:03:39.471597 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 01:03:39.610907 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 01:03:39.611455 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:03:39.628188 sudo[1702]: pam_unix(sudo:session): session closed for user root
Mar 7 01:03:39.659856 sshd[1699]: pam_unix(sshd:session): session closed for user core
Mar 7 01:03:39.665558 systemd[1]: sshd@6-10.128.0.69:22-68.220.241.50:53750.service: Deactivated successfully.
Mar 7 01:03:39.667940 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:03:39.668978 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:03:39.670700 systemd-logind[1454]: Removed session 7.
Mar 7 01:03:39.703731 systemd[1]: Started sshd@7-10.128.0.69:22-68.220.241.50:53754.service - OpenSSH per-connection server daemon (68.220.241.50:53754).
Mar 7 01:03:39.928676 sshd[1707]: Accepted publickey for core from 68.220.241.50 port 53754 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:03:39.930521 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:03:39.936802 systemd-logind[1454]: New session 8 of user core.
Mar 7 01:03:39.943551 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:03:40.077352 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 01:03:40.077862 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:03:40.083262 sudo[1711]: pam_unix(sudo:session): session closed for user root
Mar 7 01:03:40.096866 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 7 01:03:40.097403 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:03:40.119785 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 7 01:03:40.122221 auditctl[1714]: No rules
Mar 7 01:03:40.122807 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 01:03:40.123075 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 7 01:03:40.126574 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:03:40.172462 augenrules[1732]: No rules
Mar 7 01:03:40.173632 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:03:40.175074 sudo[1710]: pam_unix(sudo:session): session closed for user root
Mar 7 01:03:40.208430 sshd[1707]: pam_unix(sshd:session): session closed for user core
Mar 7 01:03:40.213045 systemd[1]: sshd@7-10.128.0.69:22-68.220.241.50:53754.service: Deactivated successfully.
Mar 7 01:03:40.215841 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:03:40.217748 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:03:40.219501 systemd-logind[1454]: Removed session 8.
Mar 7 01:03:40.255719 systemd[1]: Started sshd@8-10.128.0.69:22-68.220.241.50:53766.service - OpenSSH per-connection server daemon (68.220.241.50:53766).
Mar 7 01:03:40.469346 sshd[1740]: Accepted publickey for core from 68.220.241.50 port 53766 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:03:40.470147 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:03:40.476358 systemd-logind[1454]: New session 9 of user core.
Mar 7 01:03:40.485644 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:03:40.613360 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 01:03:40.613892 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:03:41.049707 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 01:03:41.052829 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 01:03:41.497842 dockerd[1758]: time="2026-03-07T01:03:41.497754152Z" level=info msg="Starting up"
Mar 7 01:03:41.649167 dockerd[1758]: time="2026-03-07T01:03:41.649112870Z" level=info msg="Loading containers: start."
Mar 7 01:03:41.797359 kernel: Initializing XFRM netlink socket
Mar 7 01:03:41.909716 systemd-networkd[1361]: docker0: Link UP
Mar 7 01:03:41.926138 dockerd[1758]: time="2026-03-07T01:03:41.926076850Z" level=info msg="Loading containers: done."
Mar 7 01:03:41.944310 dockerd[1758]: time="2026-03-07T01:03:41.943712471Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 01:03:41.944310 dockerd[1758]: time="2026-03-07T01:03:41.943858843Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 01:03:41.944310 dockerd[1758]: time="2026-03-07T01:03:41.944004046Z" level=info msg="Daemon has completed initialization"
Mar 7 01:03:41.944806 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3241507646-merged.mount: Deactivated successfully.
Mar 7 01:03:41.982204 dockerd[1758]: time="2026-03-07T01:03:41.982006043Z" level=info msg="API listen on /run/docker.sock"
Mar 7 01:03:41.982714 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 01:03:42.779952 containerd[1476]: time="2026-03-07T01:03:42.779893285Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 7 01:03:43.315266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167521396.mount: Deactivated successfully.
Mar 7 01:03:43.800173 systemd[1]: Started sshd@9-10.128.0.69:22-103.213.116.242:49814.service - OpenSSH per-connection server daemon (103.213.116.242:49814).
Mar 7 01:03:44.781809 containerd[1476]: time="2026-03-07T01:03:44.781733724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:44.783497 containerd[1476]: time="2026-03-07T01:03:44.783433035Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27075928"
Mar 7 01:03:44.784648 containerd[1476]: time="2026-03-07T01:03:44.784578256Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:44.788571 containerd[1476]: time="2026-03-07T01:03:44.788289127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:44.790347 containerd[1476]: time="2026-03-07T01:03:44.789709615Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.009769409s"
Mar 7 01:03:44.790347 containerd[1476]: time="2026-03-07T01:03:44.789761541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 7 01:03:44.790518 containerd[1476]: time="2026-03-07T01:03:44.790394111Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 7 01:03:46.269503 containerd[1476]: time="2026-03-07T01:03:46.269424925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:46.271268 containerd[1476]: time="2026-03-07T01:03:46.271186889Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21166069"
Mar 7 01:03:46.272585 containerd[1476]: time="2026-03-07T01:03:46.272299647Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:46.276890 containerd[1476]: time="2026-03-07T01:03:46.276844869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:46.279218 containerd[1476]: time="2026-03-07T01:03:46.278986870Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.488553767s"
Mar 7 01:03:46.279218 containerd[1476]: time="2026-03-07T01:03:46.279041929Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 7 01:03:46.280397 containerd[1476]: time="2026-03-07T01:03:46.280016742Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 7 01:03:47.507902 containerd[1476]: time="2026-03-07T01:03:47.507834310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:47.509523 containerd[1476]: time="2026-03-07T01:03:47.509463393Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15730052"
Mar 7 01:03:47.510830 containerd[1476]: time="2026-03-07T01:03:47.510757059Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:47.514488 containerd[1476]: time="2026-03-07T01:03:47.514419571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:47.516117 containerd[1476]: time="2026-03-07T01:03:47.515939595Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.235878852s"
Mar 7 01:03:47.516117 containerd[1476]: time="2026-03-07T01:03:47.515992758Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 7 01:03:47.517194 containerd[1476]: time="2026-03-07T01:03:47.516834211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 7 01:03:48.672008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384126535.mount: Deactivated successfully.
Mar 7 01:03:49.096609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:03:49.106784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:03:49.141452 containerd[1476]: time="2026-03-07T01:03:49.141376257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:49.172966 containerd[1476]: time="2026-03-07T01:03:49.172838090Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25862097"
Mar 7 01:03:49.233375 containerd[1476]: time="2026-03-07T01:03:49.231863566Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:49.356181 containerd[1476]: time="2026-03-07T01:03:49.356028157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:49.358599 containerd[1476]: time="2026-03-07T01:03:49.358541722Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.841661845s"
Mar 7 01:03:49.358816 containerd[1476]: time="2026-03-07T01:03:49.358784184Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 7 01:03:49.359553 containerd[1476]: time="2026-03-07T01:03:49.359523152Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 7 01:03:49.406854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:49.414497 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:03:49.471150 kubelet[1981]: E0307 01:03:49.470944 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:03:49.474367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:03:49.474646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:03:49.952618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140856482.mount: Deactivated successfully.
Mar 7 01:03:51.172557 containerd[1476]: time="2026-03-07T01:03:51.172470595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:51.174254 containerd[1476]: time="2026-03-07T01:03:51.174192274Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22389461"
Mar 7 01:03:51.175696 containerd[1476]: time="2026-03-07T01:03:51.175627433Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:51.180077 containerd[1476]: time="2026-03-07T01:03:51.179784023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:51.182583 containerd[1476]: time="2026-03-07T01:03:51.181358111Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.821387928s"
Mar 7 01:03:51.182583 containerd[1476]: time="2026-03-07T01:03:51.181410587Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 7 01:03:51.182583 containerd[1476]: time="2026-03-07T01:03:51.182206971Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 7 01:03:51.555274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652944737.mount: Deactivated successfully.
Mar 7 01:03:51.563815 containerd[1476]: time="2026-03-07T01:03:51.563749329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:51.565168 containerd[1476]: time="2026-03-07T01:03:51.565093798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321428"
Mar 7 01:03:51.566629 containerd[1476]: time="2026-03-07T01:03:51.566439446Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:51.569732 containerd[1476]: time="2026-03-07T01:03:51.569653888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:51.570944 containerd[1476]: time="2026-03-07T01:03:51.570895152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 388.615802ms"
Mar 7 01:03:51.571084 containerd[1476]: time="2026-03-07T01:03:51.570954381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 7 01:03:51.571985 containerd[1476]: time="2026-03-07T01:03:51.571828808Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 7 01:03:51.966798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308317757.mount: Deactivated successfully.
Mar 7 01:03:53.111629 containerd[1476]: time="2026-03-07T01:03:53.111553455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:53.113547 containerd[1476]: time="2026-03-07T01:03:53.113470680Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22861753"
Mar 7 01:03:53.115818 containerd[1476]: time="2026-03-07T01:03:53.115757096Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:53.120927 containerd[1476]: time="2026-03-07T01:03:53.120871200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:53.123067 containerd[1476]: time="2026-03-07T01:03:53.122869668Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.550989937s"
Mar 7 01:03:53.123067 containerd[1476]: time="2026-03-07T01:03:53.122926847Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 7 01:03:54.870734 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:54.881737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:03:54.925238 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 7 01:03:54.948696 systemd[1]: Reloading requested from client PID 2139 ('systemctl') (unit session-9.scope)...
Mar 7 01:03:54.948718 systemd[1]: Reloading...
Mar 7 01:03:55.135394 zram_generator::config[2181]: No configuration found.
Mar 7 01:03:55.284454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:03:55.389093 systemd[1]: Reloading finished in 439 ms.
Mar 7 01:03:55.457406 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:03:55.457775 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:03:55.458237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:55.465850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:03:55.767544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:55.781084 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:03:55.845368 kubelet[2233]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:03:55.845368 kubelet[2233]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:03:55.845368 kubelet[2233]: I0307 01:03:55.845123 2233 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:03:56.521789 kubelet[2233]: I0307 01:03:56.521736 2233 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 7 01:03:56.522146 kubelet[2233]: I0307 01:03:56.521985 2233 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:03:56.525536 kubelet[2233]: I0307 01:03:56.524956 2233 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:03:56.525536 kubelet[2233]: I0307 01:03:56.525128 2233 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:03:56.525536 kubelet[2233]: I0307 01:03:56.525503 2233 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:03:56.534162 kubelet[2233]: E0307 01:03:56.534088 2233 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:03:56.534862 kubelet[2233]: I0307 01:03:56.534822 2233 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:03:56.541733 kubelet[2233]: E0307 01:03:56.541690 2233 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:03:56.541891 kubelet[2233]: I0307 01:03:56.541768 2233 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:03:56.545668 kubelet[2233]: I0307 01:03:56.545630 2233 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:03:56.547015 kubelet[2233]: I0307 01:03:56.546957 2233 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:03:56.547257 kubelet[2233]: I0307 01:03:56.547007 2233 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:03:56.547257 kubelet[2233]: I0307 01:03:56.547253 2233 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:03:56.547540 kubelet[2233]: I0307 01:03:56.547272 2233 container_manager_linux.go:306] "Creating device plugin manager"
Mar 7 01:03:56.547540 kubelet[2233]: I0307 01:03:56.547425 2233 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:03:56.549902 kubelet[2233]: I0307 01:03:56.549867 2233 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:03:56.550314 kubelet[2233]: I0307 01:03:56.550137 2233 kubelet.go:475] "Attempting to sync node with API server"
Mar 7 01:03:56.550314 kubelet[2233]: I0307 01:03:56.550176 2233 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:03:56.550314 kubelet[2233]: I0307 01:03:56.550212 2233 kubelet.go:387] "Adding apiserver pod source"
Mar 7 01:03:56.550314 kubelet[2233]: I0307 01:03:56.550234 2233 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:03:56.551030 kubelet[2233]: E0307 01:03:56.550978 2233 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521&limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:03:56.553449 kubelet[2233]: I0307 01:03:56.553417 2233 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:03:56.555364 kubelet[2233]: I0307 01:03:56.554472 2233 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:03:56.555364 kubelet[2233]: I0307 01:03:56.554526 2233 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:03:56.555364 kubelet[2233]: W0307 01:03:56.554596 2233 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:03:56.563303 kubelet[2233]: E0307 01:03:56.563256 2233 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:03:56.572019 kubelet[2233]: I0307 01:03:56.571789 2233 server.go:1262] "Started kubelet"
Mar 7 01:03:56.579984 kubelet[2233]: I0307 01:03:56.579949 2233 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:03:56.585171 kubelet[2233]: I0307 01:03:56.585113 2233 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:03:56.590170 kubelet[2233]: I0307 01:03:56.590125 2233 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:03:56.603503 kubelet[2233]: I0307 01:03:56.603465 2233 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:03:56.605368 kubelet[2233]: I0307 01:03:56.603972 2233 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:03:56.605368 kubelet[2233]: I0307 01:03:56.595186 2233 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:03:56.605368 kubelet[2233]: E0307 01:03:56.595444 2233 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found"
Mar 7 01:03:56.605368 kubelet[2233]: I0307 01:03:56.595980 2233 server.go:310] "Adding debug handlers to kubelet server"
Mar 7 01:03:56.605368 kubelet[2233]: E0307 01:03:56.600162 2233 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:03:56.605368 kubelet[2233]: E0307 01:03:56.600271 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521?timeout=10s\": dial tcp 10.128.0.69:6443: connect: connection refused" interval="200ms"
Mar 7 01:03:56.605760 kubelet[2233]: E0307 01:03:56.600500 2233 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521.189a69881887dfb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,UID:ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,},FirstTimestamp:2026-03-07 01:03:56.571738033 +0000 UTC m=+0.785455108,LastTimestamp:2026-03-07 01:03:56.571738033 +0000 UTC m=+0.785455108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,}"
Mar 7 01:03:56.605760 kubelet[2233]: I0307 01:03:56.594646 2233 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:03:56.605760 kubelet[2233]: I0307 01:03:56.595168 2233 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 7 01:03:56.605982 kubelet[2233]: I0307 01:03:56.605782 2233 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:03:56.606462 kubelet[2233]: I0307 01:03:56.606432 2233 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:03:56.606563 kubelet[2233]: I0307 01:03:56.606540 2233 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:03:56.608499 kubelet[2233]: I0307 01:03:56.608471 2233 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:03:56.624463 kubelet[2233]: I0307 01:03:56.624408 2233 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:03:56.626608 kubelet[2233]: I0307 01:03:56.626568 2233 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:03:56.626608 kubelet[2233]: I0307 01:03:56.626606 2233 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 7 01:03:56.626798 kubelet[2233]: I0307 01:03:56.626646 2233 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 7 01:03:56.626798 kubelet[2233]: E0307 01:03:56.626711 2233 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:03:56.636456 kubelet[2233]: E0307 01:03:56.636391 2233 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:03:56.641614 kubelet[2233]: E0307 01:03:56.641184 2233 kubelet.go:1615] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:03:56.649774 kubelet[2233]: I0307 01:03:56.649737 2233 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:03:56.649774 kubelet[2233]: I0307 01:03:56.649766 2233 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:03:56.649994 kubelet[2233]: I0307 01:03:56.649790 2233 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:03:56.652376 kubelet[2233]: I0307 01:03:56.652313 2233 policy_none.go:49] "None policy: Start" Mar 7 01:03:56.652376 kubelet[2233]: I0307 01:03:56.652364 2233 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:03:56.652376 kubelet[2233]: I0307 01:03:56.652384 2233 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:03:56.654195 kubelet[2233]: I0307 01:03:56.654135 2233 policy_none.go:47] "Start" Mar 7 01:03:56.659962 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:03:56.674623 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:03:56.678669 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 01:03:56.684537 kubelet[2233]: E0307 01:03:56.684488 2233 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:03:56.684803 kubelet[2233]: I0307 01:03:56.684768 2233 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:03:56.684890 kubelet[2233]: I0307 01:03:56.684791 2233 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:03:56.685709 kubelet[2233]: I0307 01:03:56.685654 2233 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:03:56.688105 kubelet[2233]: E0307 01:03:56.687696 2233 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:03:56.688105 kubelet[2233]: E0307 01:03:56.687751 2233 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found"
Mar 7 01:03:56.752940 systemd[1]: Created slice kubepods-burstable-podcf96df604368386847f1351c533fffc4.slice - libcontainer container kubepods-burstable-podcf96df604368386847f1351c533fffc4.slice.
Mar 7 01:03:56.763657 kubelet[2233]: E0307 01:03:56.763371 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.771513 systemd[1]: Created slice kubepods-burstable-pod3053446e17301c4661e36be49b5b9761.slice - libcontainer container kubepods-burstable-pod3053446e17301c4661e36be49b5b9761.slice.
Mar 7 01:03:56.776718 kubelet[2233]: E0307 01:03:56.776449 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.779596 systemd[1]: Created slice kubepods-burstable-poddb3f557db313e16bcbcc25b05b224b8e.slice - libcontainer container kubepods-burstable-poddb3f557db313e16bcbcc25b05b224b8e.slice.
Mar 7 01:03:56.782310 kubelet[2233]: E0307 01:03:56.782274 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.793275 kubelet[2233]: I0307 01:03:56.792945 2233 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.793465 kubelet[2233]: E0307 01:03:56.793383 2233 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.69:6443/api/v1/nodes\": dial tcp 10.128.0.69:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.806556 kubelet[2233]: I0307 01:03:56.805984 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf96df604368386847f1351c533fffc4-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"cf96df604368386847f1351c533fffc4\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.806556 kubelet[2233]: I0307 01:03:56.806065 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3053446e17301c4661e36be49b5b9761-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"3053446e17301c4661e36be49b5b9761\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.806556 kubelet[2233]: I0307 01:03:56.806108 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3053446e17301c4661e36be49b5b9761-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"3053446e17301c4661e36be49b5b9761\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.806556 kubelet[2233]: I0307 01:03:56.806137 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3053446e17301c4661e36be49b5b9761-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"3053446e17301c4661e36be49b5b9761\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.806889 kubelet[2233]: I0307 01:03:56.806192 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.806889 kubelet[2233]: I0307 01:03:56.806233 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.806889 kubelet[2233]: I0307 01:03:56.806320 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.806889 kubelet[2233]: I0307 01:03:56.806409 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:56.807021 kubelet[2233]: E0307 01:03:56.806419 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521?timeout=10s\": dial tcp 10.128.0.69:6443: connect: connection refused" interval="400ms"
Mar 7 01:03:56.807021 kubelet[2233]: I0307 01:03:56.806472 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:57.008760 kubelet[2233]: I0307 01:03:57.008559 2233 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:57.009426 kubelet[2233]: E0307 01:03:57.009021 2233 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.69:6443/api/v1/nodes\": dial tcp 10.128.0.69:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:57.067716 containerd[1476]: time="2026-03-07T01:03:57.067559752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,Uid:cf96df604368386847f1351c533fffc4,Namespace:kube-system,Attempt:0,}"
Mar 7 01:03:57.081095 containerd[1476]: time="2026-03-07T01:03:57.081033813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,Uid:3053446e17301c4661e36be49b5b9761,Namespace:kube-system,Attempt:0,}"
Mar 7 01:03:57.086638 containerd[1476]: time="2026-03-07T01:03:57.086288137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,Uid:db3f557db313e16bcbcc25b05b224b8e,Namespace:kube-system,Attempt:0,}"
Mar 7 01:03:57.208014 kubelet[2233]: E0307 01:03:57.207906 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521?timeout=10s\": dial tcp 10.128.0.69:6443: connect: connection refused" interval="800ms"
Mar 7 01:03:57.420401 kubelet[2233]: I0307 01:03:57.419237 2233 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:57.420401 kubelet[2233]: E0307 01:03:57.420246 2233 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.69:6443/api/v1/nodes\": dial tcp 10.128.0.69:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:57.448162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776183851.mount: Deactivated successfully.
Mar 7 01:03:57.458455 containerd[1476]: time="2026-03-07T01:03:57.458371624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:57.459787 containerd[1476]: time="2026-03-07T01:03:57.459705499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312266"
Mar 7 01:03:57.461669 containerd[1476]: time="2026-03-07T01:03:57.460781142Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:57.462042 containerd[1476]: time="2026-03-07T01:03:57.461982451Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:57.463244 containerd[1476]: time="2026-03-07T01:03:57.463175678Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:03:57.465918 containerd[1476]: time="2026-03-07T01:03:57.464837326Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:57.465918 containerd[1476]: time="2026-03-07T01:03:57.465594859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:03:57.469968 containerd[1476]: time="2026-03-07T01:03:57.469916964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:57.472112 containerd[1476]: time="2026-03-07T01:03:57.471205095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 403.543416ms"
Mar 7 01:03:57.474642 containerd[1476]: time="2026-03-07T01:03:57.474582186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 393.461825ms"
Mar 7 01:03:57.475648 containerd[1476]: time="2026-03-07T01:03:57.475594437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 389.198967ms"
Mar 7 01:03:57.611515 kubelet[2233]: E0307 01:03:57.611456 2233 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:03:57.629037 kubelet[2233]: E0307 01:03:57.628973 2233 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:03:57.699573 kubelet[2233]: E0307 01:03:57.698961 2233 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:03:57.702948 containerd[1476]: time="2026-03-07T01:03:57.702409466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:03:57.702948 containerd[1476]: time="2026-03-07T01:03:57.702495126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:03:57.702948 containerd[1476]: time="2026-03-07T01:03:57.702543833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:57.702948 containerd[1476]: time="2026-03-07T01:03:57.702792068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:57.709041 containerd[1476]: time="2026-03-07T01:03:57.708781270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:03:57.711589 containerd[1476]: time="2026-03-07T01:03:57.711216218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:03:57.711589 containerd[1476]: time="2026-03-07T01:03:57.711264861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:57.711589 containerd[1476]: time="2026-03-07T01:03:57.711473270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:57.715869 containerd[1476]: time="2026-03-07T01:03:57.715496121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:03:57.715869 containerd[1476]: time="2026-03-07T01:03:57.715599361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:03:57.715869 containerd[1476]: time="2026-03-07T01:03:57.715626491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:57.715869 containerd[1476]: time="2026-03-07T01:03:57.715751140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:57.748561 systemd[1]: Started cri-containerd-0ae3f18ca9e5ba953b1f86071f566b950b3494491fa2783882d3aa9ea8f015db.scope - libcontainer container 0ae3f18ca9e5ba953b1f86071f566b950b3494491fa2783882d3aa9ea8f015db.
Mar 7 01:03:57.770576 systemd[1]: Started cri-containerd-42b55866357b293b2ce136b779c2f202af53fdd0984069ca02fcd92f88523641.scope - libcontainer container 42b55866357b293b2ce136b779c2f202af53fdd0984069ca02fcd92f88523641.
Mar 7 01:03:57.786695 systemd[1]: Started cri-containerd-4adb70fa7be171bc00623141fdeebd7301c4fc3d03ea3d9d9b85757134cf04f7.scope - libcontainer container 4adb70fa7be171bc00623141fdeebd7301c4fc3d03ea3d9d9b85757134cf04f7.
Mar 7 01:03:57.850364 containerd[1476]: time="2026-03-07T01:03:57.848994156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,Uid:db3f557db313e16bcbcc25b05b224b8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ae3f18ca9e5ba953b1f86071f566b950b3494491fa2783882d3aa9ea8f015db\""
Mar 7 01:03:57.858358 kubelet[2233]: E0307 01:03:57.857538 2233 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862ca"
Mar 7 01:03:57.867599 containerd[1476]: time="2026-03-07T01:03:57.867249304Z" level=info msg="CreateContainer within sandbox \"0ae3f18ca9e5ba953b1f86071f566b950b3494491fa2783882d3aa9ea8f015db\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 01:03:57.893500 containerd[1476]: time="2026-03-07T01:03:57.893436346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,Uid:cf96df604368386847f1351c533fffc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"42b55866357b293b2ce136b779c2f202af53fdd0984069ca02fcd92f88523641\""
Mar 7 01:03:57.896827 kubelet[2233]: E0307 01:03:57.896305 2233 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39"
Mar 7 01:03:57.901841 containerd[1476]: time="2026-03-07T01:03:57.901793178Z" level=info msg="CreateContainer within sandbox \"0ae3f18ca9e5ba953b1f86071f566b950b3494491fa2783882d3aa9ea8f015db\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0254c73fabd0c50f4b869b2440e369a5b0ecd779fd034e1e59796d03bca7c66d\""
Mar 7 01:03:57.902874 containerd[1476]: time="2026-03-07T01:03:57.902416813Z" level=info msg="CreateContainer within sandbox \"42b55866357b293b2ce136b779c2f202af53fdd0984069ca02fcd92f88523641\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 01:03:57.905413 containerd[1476]: time="2026-03-07T01:03:57.904435847Z" level=info msg="StartContainer for \"0254c73fabd0c50f4b869b2440e369a5b0ecd779fd034e1e59796d03bca7c66d\""
Mar 7 01:03:57.933944 containerd[1476]: time="2026-03-07T01:03:57.933889821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521,Uid:3053446e17301c4661e36be49b5b9761,Namespace:kube-system,Attempt:0,} returns sandbox id \"4adb70fa7be171bc00623141fdeebd7301c4fc3d03ea3d9d9b85757134cf04f7\""
Mar 7 01:03:57.937677 kubelet[2233]: E0307 01:03:57.937633 2233 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39"
Mar 7 01:03:57.938862 containerd[1476]: time="2026-03-07T01:03:57.938762323Z" level=info msg="CreateContainer within sandbox \"42b55866357b293b2ce136b779c2f202af53fdd0984069ca02fcd92f88523641\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"28e985e8390b0ce2b049163787f07ecb92088210bb527b1787cce5cb194dd9bc\""
Mar 7 01:03:57.940101 containerd[1476]: time="2026-03-07T01:03:57.940036892Z" level=info msg="StartContainer for \"28e985e8390b0ce2b049163787f07ecb92088210bb527b1787cce5cb194dd9bc\""
Mar 7 01:03:57.943945 containerd[1476]: time="2026-03-07T01:03:57.943899080Z" level=info msg="CreateContainer within sandbox \"4adb70fa7be171bc00623141fdeebd7301c4fc3d03ea3d9d9b85757134cf04f7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 01:03:57.954614 systemd[1]: Started cri-containerd-0254c73fabd0c50f4b869b2440e369a5b0ecd779fd034e1e59796d03bca7c66d.scope - libcontainer container 0254c73fabd0c50f4b869b2440e369a5b0ecd779fd034e1e59796d03bca7c66d.
Mar 7 01:03:57.983282 containerd[1476]: time="2026-03-07T01:03:57.981973391Z" level=info msg="CreateContainer within sandbox \"4adb70fa7be171bc00623141fdeebd7301c4fc3d03ea3d9d9b85757134cf04f7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57e43187850f0895fed19bc80a44ca9476924655982f9b33dfd80efcc65a3e45\""
Mar 7 01:03:57.983977 containerd[1476]: time="2026-03-07T01:03:57.983934994Z" level=info msg="StartContainer for \"57e43187850f0895fed19bc80a44ca9476924655982f9b33dfd80efcc65a3e45\""
Mar 7 01:03:57.996640 systemd[1]: Started cri-containerd-28e985e8390b0ce2b049163787f07ecb92088210bb527b1787cce5cb194dd9bc.scope - libcontainer container 28e985e8390b0ce2b049163787f07ecb92088210bb527b1787cce5cb194dd9bc.
Mar 7 01:03:58.009181 kubelet[2233]: E0307 01:03:58.008752 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521?timeout=10s\": dial tcp 10.128.0.69:6443: connect: connection refused" interval="1.6s"
Mar 7 01:03:58.043938 kubelet[2233]: E0307 01:03:58.043848 2233 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521&limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:03:58.050624 systemd[1]: Started cri-containerd-57e43187850f0895fed19bc80a44ca9476924655982f9b33dfd80efcc65a3e45.scope - libcontainer container 57e43187850f0895fed19bc80a44ca9476924655982f9b33dfd80efcc65a3e45.
Mar 7 01:03:58.088617 containerd[1476]: time="2026-03-07T01:03:58.088530867Z" level=info msg="StartContainer for \"0254c73fabd0c50f4b869b2440e369a5b0ecd779fd034e1e59796d03bca7c66d\" returns successfully"
Mar 7 01:03:58.148232 containerd[1476]: time="2026-03-07T01:03:58.147768147Z" level=info msg="StartContainer for \"28e985e8390b0ce2b049163787f07ecb92088210bb527b1787cce5cb194dd9bc\" returns successfully"
Mar 7 01:03:58.170765 containerd[1476]: time="2026-03-07T01:03:58.170708990Z" level=info msg="StartContainer for \"57e43187850f0895fed19bc80a44ca9476924655982f9b33dfd80efcc65a3e45\" returns successfully"
Mar 7 01:03:58.226142 kubelet[2233]: I0307 01:03:58.225989 2233 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:58.226545 kubelet[2233]: E0307 01:03:58.226475 2233 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.69:6443/api/v1/nodes\": dial tcp 10.128.0.69:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:58.657209 kubelet[2233]: E0307 01:03:58.657164 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:58.658005 kubelet[2233]: E0307 01:03:58.657973 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:58.658714 kubelet[2233]: E0307 01:03:58.658685 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:59.663394 kubelet[2233]: E0307 01:03:59.662684 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:59.663394 kubelet[2233]: E0307 01:03:59.663173 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:59.688044 kubelet[2233]: E0307 01:03:59.687790 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:03:59.834240 kubelet[2233]: I0307 01:03:59.833247 2233 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:02.765879 kubelet[2233]: E0307 01:04:02.765505 2233 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" not found" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:04.218699 kubelet[2233]: I0307 01:04:04.218579 2233 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:04.297081 kubelet[2233]: I0307 01:04:04.296669 2233 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:04.381139 kubelet[2233]: E0307 01:04:04.381074 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Mar 7 01:04:04.383289 kubelet[2233]: E0307 01:04:04.382965 2233 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:04.383289 kubelet[2233]: I0307 01:04:04.383004 2233 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:04.394518 kubelet[2233]: E0307 01:04:04.393020 2233 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:04.394518 kubelet[2233]: I0307 01:04:04.393061 2233 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:04.398998 kubelet[2233]: E0307 01:04:04.398947 2233 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:04.570377 kubelet[2233]: I0307 01:04:04.569581 2233 apiserver.go:52] "Watching apiserver"
Mar 7 01:04:04.604295 kubelet[2233]: I0307 01:04:04.604220 2233 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 7 01:04:06.275119 systemd[1]: Reloading requested from client PID 2517 ('systemctl') (unit session-9.scope)...
Mar 7 01:04:06.275142 systemd[1]: Reloading...
Mar 7 01:04:06.434413 zram_generator::config[2562]: No configuration found.
Mar 7 01:04:06.584286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:04:06.724436 systemd[1]: Reloading finished in 448 ms.
Mar 7 01:04:06.780644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:04:06.804438 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:04:06.804793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:04:06.804886 systemd[1]: kubelet.service: Consumed 1.396s CPU time, 126.0M memory peak, 0B memory swap peak.
Mar 7 01:04:06.812827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:04:07.114321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:04:07.128521 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:04:07.202995 kubelet[2610]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:04:07.202995 kubelet[2610]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:04:07.202995 kubelet[2610]: I0307 01:04:07.202042 2610 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:04:07.212066 kubelet[2610]: I0307 01:04:07.212033 2610 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 7 01:04:07.212231 kubelet[2610]: I0307 01:04:07.212220 2610 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:04:07.212309 kubelet[2610]: I0307 01:04:07.212301 2610 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:04:07.212422 kubelet[2610]: I0307 01:04:07.212402 2610 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:04:07.212757 kubelet[2610]: I0307 01:04:07.212716 2610 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:04:07.214273 kubelet[2610]: I0307 01:04:07.214221 2610 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 7 01:04:07.217357 kubelet[2610]: I0307 01:04:07.217129 2610 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:04:07.221133 kubelet[2610]: E0307 01:04:07.221105 2610 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:04:07.221396 kubelet[2610]: I0307 01:04:07.221374 2610 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:04:07.228395 kubelet[2610]: I0307 01:04:07.226893 2610 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:04:07.228395 kubelet[2610]: I0307 01:04:07.227234 2610 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:04:07.228395 kubelet[2610]: I0307 01:04:07.227289 2610 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:04:07.228395 kubelet[2610]: I0307 01:04:07.227667 2610 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:04:07.228833 kubelet[2610]: I0307 01:04:07.227692 2610 container_manager_linux.go:306] "Creating device plugin manager"
Mar 7 01:04:07.228833 kubelet[2610]: I0307 01:04:07.227734 2610 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:04:07.228833 kubelet[2610]: I0307 01:04:07.228565 2610 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:04:07.228833 kubelet[2610]: I0307 01:04:07.228791 2610 kubelet.go:475] "Attempting to sync node with API server"
Mar 7 01:04:07.228833 kubelet[2610]: I0307 01:04:07.228812 2610 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:04:07.229086 kubelet[2610]: I0307 01:04:07.228850 2610 kubelet.go:387] "Adding apiserver pod source"
Mar 7 01:04:07.229086 kubelet[2610]: I0307 01:04:07.228868 2610 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:04:07.233369 kubelet[2610]: I0307 01:04:07.232118 2610 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:04:07.233369 kubelet[2610]: I0307 01:04:07.232995 2610 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:04:07.233369 kubelet[2610]: I0307 01:04:07.233047 2610 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:04:07.277983 kubelet[2610]: I0307 01:04:07.277911 2610 server.go:1262] "Started kubelet"
Mar 7 01:04:07.285405 kubelet[2610]: I0307 01:04:07.281580 2610 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:04:07.285405 kubelet[2610]: I0307 01:04:07.281664 2610 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:04:07.285405 kubelet[2610]: I0307 01:04:07.282054 2610 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:04:07.285405 kubelet[2610]: I0307 01:04:07.282150 2610 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:04:07.285405 kubelet[2610]: I0307 01:04:07.282409 2610 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:04:07.286615 kubelet[2610]: I0307 01:04:07.286167 2610 server.go:310] "Adding debug handlers to kubelet server"
Mar 7 01:04:07.295638 kubelet[2610]: I0307 01:04:07.295598 2610 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 7 01:04:07.296715 kubelet[2610]: I0307 01:04:07.296682 2610 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:04:07.297147 kubelet[2610]: I0307 01:04:07.297080 2610 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:04:07.300300 kubelet[2610]: I0307 01:04:07.292167 2610 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:04:07.309378 kubelet[2610]: I0307 01:04:07.308535 2610 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:04:07.309378 kubelet[2610]: I0307 01:04:07.308719 2610 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:04:07.314900 kubelet[2610]: I0307 01:04:07.314239 2610 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:04:07.322943 kubelet[2610]: I0307 01:04:07.322909 2610 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:04:07.324934 kubelet[2610]: E0307 01:04:07.324893 2610 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:04:07.352543 kubelet[2610]: I0307 01:04:07.352476 2610 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:04:07.353603 kubelet[2610]: I0307 01:04:07.352745 2610 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 7 01:04:07.353603 kubelet[2610]: I0307 01:04:07.352784 2610 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 7 01:04:07.353603 kubelet[2610]: E0307 01:04:07.352871 2610 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:04:07.411102 kubelet[2610]: I0307 01:04:07.410987 2610 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:04:07.411298 kubelet[2610]: I0307 01:04:07.411277 2610 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:04:07.411469 kubelet[2610]: I0307 01:04:07.411456 2610 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:04:07.411762 kubelet[2610]: I0307 01:04:07.411740 2610 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 7 01:04:07.411883 kubelet[2610]: I0307 01:04:07.411855 2610 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 7 01:04:07.411963 kubelet[2610]: I0307 01:04:07.411952 2610 policy_none.go:49] "None policy: Start"
Mar 7 01:04:07.412047 kubelet[2610]: I0307 01:04:07.412037 2610 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:04:07.412139 kubelet[2610]: I0307 01:04:07.412126 2610 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:04:07.412415 kubelet[2610]: I0307 01:04:07.412397 2610 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 7 01:04:07.413547 kubelet[2610]: I0307 01:04:07.412507 2610 policy_none.go:47] "Start"
Mar 7 01:04:07.423408 kubelet[2610]: E0307 01:04:07.423313 2610 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:04:07.423908 kubelet[2610]: I0307 01:04:07.423882 2610 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:04:07.424068 kubelet[2610]: I0307 01:04:07.424025 2610 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:04:07.425614 kubelet[2610]: I0307 01:04:07.424484 2610 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:04:07.429044 kubelet[2610]: E0307 01:04:07.428113 2610 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:04:07.454795 kubelet[2610]: I0307 01:04:07.454152 2610 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.454795 kubelet[2610]: I0307 01:04:07.454708 2610 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.455172 kubelet[2610]: I0307 01:04:07.455091 2610 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.463252 kubelet[2610]: I0307 01:04:07.463213 2610 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Mar 7 01:04:07.465570 kubelet[2610]: I0307 01:04:07.465512 2610 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Mar 7 01:04:07.466890 kubelet[2610]: I0307 01:04:07.465646 2610 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Mar 7 01:04:07.501450 kubelet[2610]: I0307 01:04:07.500943 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3053446e17301c4661e36be49b5b9761-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"3053446e17301c4661e36be49b5b9761\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.501450 kubelet[2610]: I0307 01:04:07.501016 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3053446e17301c4661e36be49b5b9761-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"3053446e17301c4661e36be49b5b9761\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.501450 kubelet[2610]: I0307 01:04:07.501052 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.501450 kubelet[2610]: I0307 01:04:07.501085 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.501824 kubelet[2610]: I0307 01:04:07.501116 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.501824 kubelet[2610]: I0307 01:04:07.501145 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3053446e17301c4661e36be49b5b9761-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"3053446e17301c4661e36be49b5b9761\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.501824 kubelet[2610]: I0307 01:04:07.501174 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.501824 kubelet[2610]: I0307 01:04:07.501202 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db3f557db313e16bcbcc25b05b224b8e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"db3f557db313e16bcbcc25b05b224b8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.502039 kubelet[2610]: I0307 01:04:07.501231 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf96df604368386847f1351c533fffc4-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" (UID: \"cf96df604368386847f1351c533fffc4\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.541136 kubelet[2610]: I0307 01:04:07.541099 2610 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.550515 kubelet[2610]: I0307 01:04:07.550064 2610 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:07.550515 kubelet[2610]: I0307 01:04:07.550172 2610 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:08.230483 kubelet[2610]: I0307 01:04:08.230438 2610 apiserver.go:52] "Watching apiserver"
Mar 7 01:04:08.298366 kubelet[2610]: I0307 01:04:08.296988 2610 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 7 01:04:08.389618 kubelet[2610]: I0307 01:04:08.389573 2610 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:08.399536 kubelet[2610]: I0307 01:04:08.399486 2610 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Mar 7 01:04:08.399752 kubelet[2610]: E0307 01:04:08.399577 2610 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521"
Mar 7 01:04:08.408280 kubelet[2610]: I0307 01:04:08.408150 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" podStartSLOduration=1.408125308 podStartE2EDuration="1.408125308s" podCreationTimestamp="2026-03-07 01:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:04:08.39121311 +0000 UTC m=+1.256778085" watchObservedRunningTime="2026-03-07 01:04:08.408125308 +0000 UTC m=+1.273690278"
Mar 7 01:04:08.427234 kubelet[2610]: I0307 01:04:08.427159 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" podStartSLOduration=1.427137012 podStartE2EDuration="1.427137012s" podCreationTimestamp="2026-03-07 01:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:04:08.408483021 +0000 UTC m=+1.274047995" watchObservedRunningTime="2026-03-07 01:04:08.427137012 +0000 UTC m=+1.292701973"
Mar 7 01:04:08.471397 kubelet[2610]: I0307 01:04:08.471277 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" podStartSLOduration=1.471250119 podStartE2EDuration="1.471250119s" podCreationTimestamp="2026-03-07 01:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:04:08.428123108 +0000 UTC m=+1.293688082" watchObservedRunningTime="2026-03-07 01:04:08.471250119 +0000 UTC m=+1.336815095"
Mar 7 01:04:08.977516 update_engine[1459]: I20260307 01:04:08.977416 1459 update_attempter.cc:509] Updating boot flags...
Mar 7 01:04:09.049481 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2667)
Mar 7 01:04:09.189369 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2667)
Mar 7 01:04:09.319809 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2667)
Mar 7 01:04:13.733327 kubelet[2610]: I0307 01:04:13.733281 2610 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 7 01:04:13.734279 containerd[1476]: time="2026-03-07T01:04:13.734219768Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 7 01:04:13.735052 kubelet[2610]: I0307 01:04:13.735009 2610 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 7 01:04:14.794652 systemd[1]: Created slice kubepods-besteffort-pod9ea50a34_ee8f_4173_b5fe_23120b5930ce.slice - libcontainer container kubepods-besteffort-pod9ea50a34_ee8f_4173_b5fe_23120b5930ce.slice.
Mar 7 01:04:14.854465 kubelet[2610]: I0307 01:04:14.854397 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ea50a34-ee8f-4173-b5fe-23120b5930ce-kube-proxy\") pod \"kube-proxy-vvfbc\" (UID: \"9ea50a34-ee8f-4173-b5fe-23120b5930ce\") " pod="kube-system/kube-proxy-vvfbc"
Mar 7 01:04:14.854465 kubelet[2610]: I0307 01:04:14.854459 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twqm\" (UniqueName: \"kubernetes.io/projected/9ea50a34-ee8f-4173-b5fe-23120b5930ce-kube-api-access-9twqm\") pod \"kube-proxy-vvfbc\" (UID: \"9ea50a34-ee8f-4173-b5fe-23120b5930ce\") " pod="kube-system/kube-proxy-vvfbc"
Mar 7 01:04:14.855052 kubelet[2610]: I0307 01:04:14.854495 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ea50a34-ee8f-4173-b5fe-23120b5930ce-xtables-lock\") pod \"kube-proxy-vvfbc\" (UID: \"9ea50a34-ee8f-4173-b5fe-23120b5930ce\") " pod="kube-system/kube-proxy-vvfbc"
Mar 7 01:04:14.855052 kubelet[2610]: I0307 01:04:14.854517 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ea50a34-ee8f-4173-b5fe-23120b5930ce-lib-modules\") pod \"kube-proxy-vvfbc\" (UID: \"9ea50a34-ee8f-4173-b5fe-23120b5930ce\") " pod="kube-system/kube-proxy-vvfbc"
Mar 7 01:04:14.993995 systemd[1]: Created slice kubepods-besteffort-pod1e11675a_6b32_485b_b923_2b97c55cdddd.slice - libcontainer container kubepods-besteffort-pod1e11675a_6b32_485b_b923_2b97c55cdddd.slice.
Mar 7 01:04:15.056744 kubelet[2610]: I0307 01:04:15.056550 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94qh\" (UniqueName: \"kubernetes.io/projected/1e11675a-6b32-485b-b923-2b97c55cdddd-kube-api-access-k94qh\") pod \"tigera-operator-5588576f44-jm7jw\" (UID: \"1e11675a-6b32-485b-b923-2b97c55cdddd\") " pod="tigera-operator/tigera-operator-5588576f44-jm7jw"
Mar 7 01:04:15.056744 kubelet[2610]: I0307 01:04:15.056613 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e11675a-6b32-485b-b923-2b97c55cdddd-var-lib-calico\") pod \"tigera-operator-5588576f44-jm7jw\" (UID: \"1e11675a-6b32-485b-b923-2b97c55cdddd\") " pod="tigera-operator/tigera-operator-5588576f44-jm7jw"
Mar 7 01:04:15.108504 containerd[1476]: time="2026-03-07T01:04:15.107973436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvfbc,Uid:9ea50a34-ee8f-4173-b5fe-23120b5930ce,Namespace:kube-system,Attempt:0,}"
Mar 7 01:04:15.143241 containerd[1476]: time="2026-03-07T01:04:15.142558568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:04:15.143241 containerd[1476]: time="2026-03-07T01:04:15.142654987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:04:15.143241 containerd[1476]: time="2026-03-07T01:04:15.142711952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:15.143241 containerd[1476]: time="2026-03-07T01:04:15.142865450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:15.196569 systemd[1]: Started cri-containerd-94bf7b804f76ff732eb93190453182dada1d2b5d8b2ea6d90d5fd35082dfbf74.scope - libcontainer container 94bf7b804f76ff732eb93190453182dada1d2b5d8b2ea6d90d5fd35082dfbf74.
Mar 7 01:04:15.230412 containerd[1476]: time="2026-03-07T01:04:15.230290151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvfbc,Uid:9ea50a34-ee8f-4173-b5fe-23120b5930ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"94bf7b804f76ff732eb93190453182dada1d2b5d8b2ea6d90d5fd35082dfbf74\""
Mar 7 01:04:15.239541 containerd[1476]: time="2026-03-07T01:04:15.239376699Z" level=info msg="CreateContainer within sandbox \"94bf7b804f76ff732eb93190453182dada1d2b5d8b2ea6d90d5fd35082dfbf74\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 7 01:04:15.262217 containerd[1476]: time="2026-03-07T01:04:15.262155439Z" level=info msg="CreateContainer within sandbox \"94bf7b804f76ff732eb93190453182dada1d2b5d8b2ea6d90d5fd35082dfbf74\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"897b71cc5a6bc767e80eff3f6c58f758acdaca9919b0dc9b43b0e753a1aa8cc7\""
Mar 7 01:04:15.262954 containerd[1476]: time="2026-03-07T01:04:15.262890786Z" level=info msg="StartContainer for \"897b71cc5a6bc767e80eff3f6c58f758acdaca9919b0dc9b43b0e753a1aa8cc7\""
Mar 7 01:04:15.301586 systemd[1]: Started cri-containerd-897b71cc5a6bc767e80eff3f6c58f758acdaca9919b0dc9b43b0e753a1aa8cc7.scope - libcontainer container 897b71cc5a6bc767e80eff3f6c58f758acdaca9919b0dc9b43b0e753a1aa8cc7.
Mar 7 01:04:15.303764 containerd[1476]: time="2026-03-07T01:04:15.303160940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-jm7jw,Uid:1e11675a-6b32-485b-b923-2b97c55cdddd,Namespace:tigera-operator,Attempt:0,}"
Mar 7 01:04:15.365374 containerd[1476]: time="2026-03-07T01:04:15.365082929Z" level=info msg="StartContainer for \"897b71cc5a6bc767e80eff3f6c58f758acdaca9919b0dc9b43b0e753a1aa8cc7\" returns successfully"
Mar 7 01:04:15.369317 containerd[1476]: time="2026-03-07T01:04:15.368939441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:04:15.369317 containerd[1476]: time="2026-03-07T01:04:15.369010935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:04:15.369317 containerd[1476]: time="2026-03-07T01:04:15.369060747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:15.369317 containerd[1476]: time="2026-03-07T01:04:15.369194858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:15.403633 systemd[1]: Started cri-containerd-d2b96a973538f5fa5a915d8ddb3c186c7f8a78c74e68b1cb8cf942b58040c023.scope - libcontainer container d2b96a973538f5fa5a915d8ddb3c186c7f8a78c74e68b1cb8cf942b58040c023.
Mar 7 01:04:15.508213 containerd[1476]: time="2026-03-07T01:04:15.507992275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-jm7jw,Uid:1e11675a-6b32-485b-b923-2b97c55cdddd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d2b96a973538f5fa5a915d8ddb3c186c7f8a78c74e68b1cb8cf942b58040c023\""
Mar 7 01:04:15.516055 containerd[1476]: time="2026-03-07T01:04:15.515989124Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 7 01:04:16.677925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003694159.mount: Deactivated successfully.
Mar 7 01:04:18.093625 containerd[1476]: time="2026-03-07T01:04:18.093553033Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:18.095063 containerd[1476]: time="2026-03-07T01:04:18.094989516Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 7 01:04:18.096384 containerd[1476]: time="2026-03-07T01:04:18.096204903Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:18.099782 containerd[1476]: time="2026-03-07T01:04:18.099710571Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:18.104301 containerd[1476]: time="2026-03-07T01:04:18.104257567Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.588176969s"
Mar 7 01:04:18.104301 containerd[1476]: time="2026-03-07T01:04:18.104355190Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 7 01:04:18.117081 containerd[1476]: time="2026-03-07T01:04:18.116826940Z" level=info msg="CreateContainer within sandbox \"d2b96a973538f5fa5a915d8ddb3c186c7f8a78c74e68b1cb8cf942b58040c023\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 7 01:04:18.142665 containerd[1476]: time="2026-03-07T01:04:18.142586101Z" level=info msg="CreateContainer within sandbox \"d2b96a973538f5fa5a915d8ddb3c186c7f8a78c74e68b1cb8cf942b58040c023\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b1ee617f66b0e32c65590c31cf56950503d0f8cfb0035410abf735ee9a6b8760\""
Mar 7 01:04:18.144582 containerd[1476]: time="2026-03-07T01:04:18.143244500Z" level=info msg="StartContainer for \"b1ee617f66b0e32c65590c31cf56950503d0f8cfb0035410abf735ee9a6b8760\""
Mar 7 01:04:18.192601 systemd[1]: Started cri-containerd-b1ee617f66b0e32c65590c31cf56950503d0f8cfb0035410abf735ee9a6b8760.scope - libcontainer container b1ee617f66b0e32c65590c31cf56950503d0f8cfb0035410abf735ee9a6b8760.
Mar 7 01:04:18.229397 containerd[1476]: time="2026-03-07T01:04:18.229310249Z" level=info msg="StartContainer for \"b1ee617f66b0e32c65590c31cf56950503d0f8cfb0035410abf735ee9a6b8760\" returns successfully" Mar 7 01:04:18.443714 kubelet[2610]: I0307 01:04:18.443304 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vvfbc" podStartSLOduration=4.443278264 podStartE2EDuration="4.443278264s" podCreationTimestamp="2026-03-07 01:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:04:15.428527119 +0000 UTC m=+8.294092094" watchObservedRunningTime="2026-03-07 01:04:18.443278264 +0000 UTC m=+11.308843239" Mar 7 01:04:23.421898 sudo[1743]: pam_unix(sudo:session): session closed for user root Mar 7 01:04:23.453836 sshd[1740]: pam_unix(sshd:session): session closed for user core Mar 7 01:04:23.460704 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:04:23.464646 systemd[1]: sshd@8-10.128.0.69:22-68.220.241.50:53766.service: Deactivated successfully. Mar 7 01:04:23.469631 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:04:23.469931 systemd[1]: session-9.scope: Consumed 4.726s CPU time, 162.8M memory peak, 0B memory swap peak. Mar 7 01:04:23.473632 systemd-logind[1454]: Removed session 9. Mar 7 01:04:26.469768 systemd[1]: Started sshd@10-10.128.0.69:22-171.231.176.228:33704.service - OpenSSH per-connection server daemon (171.231.176.228:33704). 
Mar 7 01:04:27.426256 sshd[3024]: Invalid user admin from 171.231.176.228 port 33704 Mar 7 01:04:27.634357 sshd[3024]: PAM: Permission denied for illegal user admin from 171.231.176.228 Mar 7 01:04:27.635070 sshd[3024]: Failed keyboard-interactive/pam for invalid user admin from 171.231.176.228 port 33704 ssh2 Mar 7 01:04:27.917456 sshd[3024]: Connection closed by invalid user admin 171.231.176.228 port 33704 [preauth] Mar 7 01:04:27.924454 systemd[1]: sshd@10-10.128.0.69:22-171.231.176.228:33704.service: Deactivated successfully. Mar 7 01:04:28.432676 kubelet[2610]: I0307 01:04:28.432588 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-jm7jw" podStartSLOduration=11.838280303 podStartE2EDuration="14.432562073s" podCreationTimestamp="2026-03-07 01:04:14 +0000 UTC" firstStartedPulling="2026-03-07 01:04:15.51391235 +0000 UTC m=+8.379477308" lastFinishedPulling="2026-03-07 01:04:18.108194117 +0000 UTC m=+10.973759078" observedRunningTime="2026-03-07 01:04:18.447196338 +0000 UTC m=+11.312761313" watchObservedRunningTime="2026-03-07 01:04:28.432562073 +0000 UTC m=+21.298127048" Mar 7 01:04:28.445651 kubelet[2610]: I0307 01:04:28.444554 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxfxz\" (UniqueName: \"kubernetes.io/projected/c8e69e3f-98ab-45ec-abcd-2605ceb8eab7-kube-api-access-kxfxz\") pod \"calico-typha-54fc575d9f-q8wd4\" (UID: \"c8e69e3f-98ab-45ec-abcd-2605ceb8eab7\") " pod="calico-system/calico-typha-54fc575d9f-q8wd4" Mar 7 01:04:28.445651 kubelet[2610]: I0307 01:04:28.445472 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c8e69e3f-98ab-45ec-abcd-2605ceb8eab7-typha-certs\") pod \"calico-typha-54fc575d9f-q8wd4\" (UID: \"c8e69e3f-98ab-45ec-abcd-2605ceb8eab7\") " pod="calico-system/calico-typha-54fc575d9f-q8wd4" Mar 7 
01:04:28.445651 kubelet[2610]: I0307 01:04:28.445531 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8e69e3f-98ab-45ec-abcd-2605ceb8eab7-tigera-ca-bundle\") pod \"calico-typha-54fc575d9f-q8wd4\" (UID: \"c8e69e3f-98ab-45ec-abcd-2605ceb8eab7\") " pod="calico-system/calico-typha-54fc575d9f-q8wd4" Mar 7 01:04:28.453524 systemd[1]: Created slice kubepods-besteffort-podc8e69e3f_98ab_45ec_abcd_2605ceb8eab7.slice - libcontainer container kubepods-besteffort-podc8e69e3f_98ab_45ec_abcd_2605ceb8eab7.slice. Mar 7 01:04:28.614045 systemd[1]: Created slice kubepods-besteffort-podefa93dcf_0187_4290_8765_79288a26c881.slice - libcontainer container kubepods-besteffort-podefa93dcf_0187_4290_8765_79288a26c881.slice. Mar 7 01:04:28.647005 kubelet[2610]: I0307 01:04:28.646936 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-cni-net-dir\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647005 kubelet[2610]: I0307 01:04:28.647001 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-cni-log-dir\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647257 kubelet[2610]: I0307 01:04:28.647042 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-flexvol-driver-host\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647257 
kubelet[2610]: I0307 01:04:28.647066 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-lib-modules\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647257 kubelet[2610]: I0307 01:04:28.647093 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-bpffs\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647257 kubelet[2610]: I0307 01:04:28.647114 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-var-run-calico\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647257 kubelet[2610]: I0307 01:04:28.647141 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/efa93dcf-0187-4290-8765-79288a26c881-node-certs\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647675 kubelet[2610]: I0307 01:04:28.647165 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-policysync\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647675 kubelet[2610]: I0307 01:04:28.647187 2610 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-sys-fs\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647675 kubelet[2610]: I0307 01:04:28.647210 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-var-lib-calico\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647675 kubelet[2610]: I0307 01:04:28.647241 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzp2m\" (UniqueName: \"kubernetes.io/projected/efa93dcf-0187-4290-8765-79288a26c881-kube-api-access-lzp2m\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.647675 kubelet[2610]: I0307 01:04:28.647266 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-xtables-lock\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.648630 kubelet[2610]: I0307 01:04:28.647296 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efa93dcf-0187-4290-8765-79288a26c881-tigera-ca-bundle\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.648630 kubelet[2610]: I0307 01:04:28.647326 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-cni-bin-dir\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.648630 kubelet[2610]: I0307 01:04:28.647410 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/efa93dcf-0187-4290-8765-79288a26c881-nodeproc\") pod \"calico-node-d4bqh\" (UID: \"efa93dcf-0187-4290-8765-79288a26c881\") " pod="calico-system/calico-node-d4bqh" Mar 7 01:04:28.670365 kubelet[2610]: E0307 01:04:28.670276 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:28.749987 kubelet[2610]: I0307 01:04:28.747643 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fad7ec34-4cf5-4a59-a390-83631ed6b6c6-kubelet-dir\") pod \"csi-node-driver-hhvgj\" (UID: \"fad7ec34-4cf5-4a59-a390-83631ed6b6c6\") " pod="calico-system/csi-node-driver-hhvgj" Mar 7 01:04:28.749987 kubelet[2610]: I0307 01:04:28.747706 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fad7ec34-4cf5-4a59-a390-83631ed6b6c6-varrun\") pod \"csi-node-driver-hhvgj\" (UID: \"fad7ec34-4cf5-4a59-a390-83631ed6b6c6\") " pod="calico-system/csi-node-driver-hhvgj" Mar 7 01:04:28.749987 kubelet[2610]: I0307 01:04:28.747779 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/fad7ec34-4cf5-4a59-a390-83631ed6b6c6-socket-dir\") pod \"csi-node-driver-hhvgj\" (UID: \"fad7ec34-4cf5-4a59-a390-83631ed6b6c6\") " pod="calico-system/csi-node-driver-hhvgj" Mar 7 01:04:28.750303 kubelet[2610]: I0307 01:04:28.750123 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fad7ec34-4cf5-4a59-a390-83631ed6b6c6-registration-dir\") pod \"csi-node-driver-hhvgj\" (UID: \"fad7ec34-4cf5-4a59-a390-83631ed6b6c6\") " pod="calico-system/csi-node-driver-hhvgj" Mar 7 01:04:28.750404 kubelet[2610]: I0307 01:04:28.750258 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltfgg\" (UniqueName: \"kubernetes.io/projected/fad7ec34-4cf5-4a59-a390-83631ed6b6c6-kube-api-access-ltfgg\") pod \"csi-node-driver-hhvgj\" (UID: \"fad7ec34-4cf5-4a59-a390-83631ed6b6c6\") " pod="calico-system/csi-node-driver-hhvgj" Mar 7 01:04:28.758361 kubelet[2610]: E0307 01:04:28.755181 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.758361 kubelet[2610]: W0307 01:04:28.755216 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.758361 kubelet[2610]: E0307 01:04:28.755269 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.758361 kubelet[2610]: E0307 01:04:28.756798 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.758361 kubelet[2610]: W0307 01:04:28.756818 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.758361 kubelet[2610]: E0307 01:04:28.756864 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.758361 kubelet[2610]: E0307 01:04:28.757418 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.758361 kubelet[2610]: W0307 01:04:28.757445 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.758361 kubelet[2610]: E0307 01:04:28.757468 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.758361 kubelet[2610]: E0307 01:04:28.758224 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.758959 kubelet[2610]: W0307 01:04:28.758252 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.758959 kubelet[2610]: E0307 01:04:28.758274 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.759362 kubelet[2610]: E0307 01:04:28.759164 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.759362 kubelet[2610]: W0307 01:04:28.759293 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.759362 kubelet[2610]: E0307 01:04:28.759317 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.762982 kubelet[2610]: E0307 01:04:28.762073 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.763117 kubelet[2610]: W0307 01:04:28.762985 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.763117 kubelet[2610]: E0307 01:04:28.763009 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.764188 kubelet[2610]: E0307 01:04:28.764104 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.764188 kubelet[2610]: W0307 01:04:28.764133 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.764188 kubelet[2610]: E0307 01:04:28.764154 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.764950 kubelet[2610]: E0307 01:04:28.764498 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.764950 kubelet[2610]: W0307 01:04:28.764517 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.764950 kubelet[2610]: E0307 01:04:28.764538 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.783531 kubelet[2610]: E0307 01:04:28.783486 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.783764 kubelet[2610]: W0307 01:04:28.783734 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.783932 kubelet[2610]: E0307 01:04:28.783912 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.795951 containerd[1476]: time="2026-03-07T01:04:28.795883550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54fc575d9f-q8wd4,Uid:c8e69e3f-98ab-45ec-abcd-2605ceb8eab7,Namespace:calico-system,Attempt:0,}" Mar 7 01:04:28.800439 kubelet[2610]: E0307 01:04:28.800409 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.800439 kubelet[2610]: W0307 01:04:28.800434 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.800439 kubelet[2610]: E0307 01:04:28.800461 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.832817 containerd[1476]: time="2026-03-07T01:04:28.832125219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:04:28.832817 containerd[1476]: time="2026-03-07T01:04:28.832561834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:04:28.832817 containerd[1476]: time="2026-03-07T01:04:28.832601496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:28.832817 containerd[1476]: time="2026-03-07T01:04:28.832713784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:28.852422 kubelet[2610]: E0307 01:04:28.852376 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.853057 kubelet[2610]: W0307 01:04:28.852528 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.853057 kubelet[2610]: E0307 01:04:28.852567 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.857183 kubelet[2610]: E0307 01:04:28.855997 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.857183 kubelet[2610]: W0307 01:04:28.856021 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.857183 kubelet[2610]: E0307 01:04:28.856049 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.857183 kubelet[2610]: E0307 01:04:28.856458 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.857183 kubelet[2610]: W0307 01:04:28.856477 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.857183 kubelet[2610]: E0307 01:04:28.856500 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.857183 kubelet[2610]: E0307 01:04:28.856799 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.857183 kubelet[2610]: W0307 01:04:28.856815 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.857183 kubelet[2610]: E0307 01:04:28.856832 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.858200 kubelet[2610]: E0307 01:04:28.857228 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.858200 kubelet[2610]: W0307 01:04:28.857242 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.858200 kubelet[2610]: E0307 01:04:28.857259 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.858200 kubelet[2610]: E0307 01:04:28.857597 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.858200 kubelet[2610]: W0307 01:04:28.857611 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.858200 kubelet[2610]: E0307 01:04:28.857634 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.858200 kubelet[2610]: E0307 01:04:28.857962 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.858200 kubelet[2610]: W0307 01:04:28.857976 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.858200 kubelet[2610]: E0307 01:04:28.857992 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.858675 kubelet[2610]: E0307 01:04:28.858320 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.858675 kubelet[2610]: W0307 01:04:28.858359 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.858675 kubelet[2610]: E0307 01:04:28.858379 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.858844 kubelet[2610]: E0307 01:04:28.858686 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.858844 kubelet[2610]: W0307 01:04:28.858700 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.858844 kubelet[2610]: E0307 01:04:28.858714 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.859778 kubelet[2610]: E0307 01:04:28.859048 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.859778 kubelet[2610]: W0307 01:04:28.859067 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.859778 kubelet[2610]: E0307 01:04:28.859094 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.859778 kubelet[2610]: E0307 01:04:28.859390 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.859778 kubelet[2610]: W0307 01:04:28.859405 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.859778 kubelet[2610]: E0307 01:04:28.859426 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.859778 kubelet[2610]: E0307 01:04:28.859766 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.859778 kubelet[2610]: W0307 01:04:28.859780 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.861468 kubelet[2610]: E0307 01:04:28.859796 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.861468 kubelet[2610]: E0307 01:04:28.860832 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.861468 kubelet[2610]: W0307 01:04:28.860848 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.861468 kubelet[2610]: E0307 01:04:28.860866 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.860588 systemd[1]: Started cri-containerd-8673b4630d8d85ce2191a2432bd69db0846be401a92ed976195e0fcade0d1d3b.scope - libcontainer container 8673b4630d8d85ce2191a2432bd69db0846be401a92ed976195e0fcade0d1d3b. Mar 7 01:04:28.863573 kubelet[2610]: E0307 01:04:28.863542 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.863573 kubelet[2610]: W0307 01:04:28.863566 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.863710 kubelet[2610]: E0307 01:04:28.863583 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.864053 kubelet[2610]: E0307 01:04:28.863869 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.864053 kubelet[2610]: W0307 01:04:28.863886 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.864053 kubelet[2610]: E0307 01:04:28.863902 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.864496 kubelet[2610]: E0307 01:04:28.864321 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.864496 kubelet[2610]: W0307 01:04:28.864373 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.864496 kubelet[2610]: E0307 01:04:28.864392 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.864998 kubelet[2610]: E0307 01:04:28.864968 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.864998 kubelet[2610]: W0307 01:04:28.864985 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.864998 kubelet[2610]: E0307 01:04:28.865003 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.865793 kubelet[2610]: E0307 01:04:28.865756 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.865793 kubelet[2610]: W0307 01:04:28.865787 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.866478 kubelet[2610]: E0307 01:04:28.865805 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.866478 kubelet[2610]: E0307 01:04:28.866258 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.866478 kubelet[2610]: W0307 01:04:28.866273 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.866478 kubelet[2610]: E0307 01:04:28.866289 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.866701 kubelet[2610]: E0307 01:04:28.866643 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.866701 kubelet[2610]: W0307 01:04:28.866656 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.866701 kubelet[2610]: E0307 01:04:28.866673 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.868251 kubelet[2610]: E0307 01:04:28.867026 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.868251 kubelet[2610]: W0307 01:04:28.867043 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.868251 kubelet[2610]: E0307 01:04:28.867062 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.868251 kubelet[2610]: E0307 01:04:28.867521 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.868251 kubelet[2610]: W0307 01:04:28.867537 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.868251 kubelet[2610]: E0307 01:04:28.867555 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.868251 kubelet[2610]: E0307 01:04:28.868050 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.868251 kubelet[2610]: W0307 01:04:28.868069 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.868251 kubelet[2610]: E0307 01:04:28.868102 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.869726 kubelet[2610]: E0307 01:04:28.868454 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.869726 kubelet[2610]: W0307 01:04:28.868468 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.869726 kubelet[2610]: E0307 01:04:28.868483 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.869726 kubelet[2610]: E0307 01:04:28.868830 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.869726 kubelet[2610]: W0307 01:04:28.868844 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.869726 kubelet[2610]: E0307 01:04:28.868859 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:28.887591 kubelet[2610]: E0307 01:04:28.887529 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:28.887805 kubelet[2610]: W0307 01:04:28.887726 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:28.887805 kubelet[2610]: E0307 01:04:28.887761 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:28.928301 containerd[1476]: time="2026-03-07T01:04:28.928231632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d4bqh,Uid:efa93dcf-0187-4290-8765-79288a26c881,Namespace:calico-system,Attempt:0,}" Mar 7 01:04:28.937471 containerd[1476]: time="2026-03-07T01:04:28.937310685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54fc575d9f-q8wd4,Uid:c8e69e3f-98ab-45ec-abcd-2605ceb8eab7,Namespace:calico-system,Attempt:0,} returns sandbox id \"8673b4630d8d85ce2191a2432bd69db0846be401a92ed976195e0fcade0d1d3b\"" Mar 7 01:04:28.940662 containerd[1476]: time="2026-03-07T01:04:28.940459425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:04:28.966187 containerd[1476]: time="2026-03-07T01:04:28.966040929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:04:28.966187 containerd[1476]: time="2026-03-07T01:04:28.966108803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:04:28.966187 containerd[1476]: time="2026-03-07T01:04:28.966126501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:28.966518 containerd[1476]: time="2026-03-07T01:04:28.966240940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:28.991563 systemd[1]: Started cri-containerd-5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e.scope - libcontainer container 5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e. 
Mar 7 01:04:29.030888 containerd[1476]: time="2026-03-07T01:04:29.030660609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d4bqh,Uid:efa93dcf-0187-4290-8765-79288a26c881,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e\"" Mar 7 01:04:30.073214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount607152029.mount: Deactivated successfully. Mar 7 01:04:30.354746 kubelet[2610]: E0307 01:04:30.353305 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:31.018062 containerd[1476]: time="2026-03-07T01:04:31.017986558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:31.019375 containerd[1476]: time="2026-03-07T01:04:31.019284349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 01:04:31.020724 containerd[1476]: time="2026-03-07T01:04:31.020657525Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:31.023455 containerd[1476]: time="2026-03-07T01:04:31.023419899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:31.025242 containerd[1476]: time="2026-03-07T01:04:31.024487446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.083981554s" Mar 7 01:04:31.025242 containerd[1476]: time="2026-03-07T01:04:31.024534078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:04:31.026272 containerd[1476]: time="2026-03-07T01:04:31.026237919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:04:31.047935 containerd[1476]: time="2026-03-07T01:04:31.047690892Z" level=info msg="CreateContainer within sandbox \"8673b4630d8d85ce2191a2432bd69db0846be401a92ed976195e0fcade0d1d3b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:04:31.064537 containerd[1476]: time="2026-03-07T01:04:31.064473751Z" level=info msg="CreateContainer within sandbox \"8673b4630d8d85ce2191a2432bd69db0846be401a92ed976195e0fcade0d1d3b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e9f0432cf474287aa82bde904d77b076e76547b1c2e9b632309f1291342ebae4\"" Mar 7 01:04:31.065508 containerd[1476]: time="2026-03-07T01:04:31.065455901Z" level=info msg="StartContainer for \"e9f0432cf474287aa82bde904d77b076e76547b1c2e9b632309f1291342ebae4\"" Mar 7 01:04:31.117596 systemd[1]: Started cri-containerd-e9f0432cf474287aa82bde904d77b076e76547b1c2e9b632309f1291342ebae4.scope - libcontainer container e9f0432cf474287aa82bde904d77b076e76547b1c2e9b632309f1291342ebae4. 
Mar 7 01:04:31.176694 containerd[1476]: time="2026-03-07T01:04:31.176623485Z" level=info msg="StartContainer for \"e9f0432cf474287aa82bde904d77b076e76547b1c2e9b632309f1291342ebae4\" returns successfully" Mar 7 01:04:31.563328 kubelet[2610]: E0307 01:04:31.563196 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.563328 kubelet[2610]: W0307 01:04:31.563256 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.563328 kubelet[2610]: E0307 01:04:31.563286 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.565938 kubelet[2610]: E0307 01:04:31.565698 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.565938 kubelet[2610]: W0307 01:04:31.565747 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.565938 kubelet[2610]: E0307 01:04:31.565775 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.566562 kubelet[2610]: E0307 01:04:31.566135 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.566562 kubelet[2610]: W0307 01:04:31.566150 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.566562 kubelet[2610]: E0307 01:04:31.566170 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.567327 kubelet[2610]: E0307 01:04:31.567083 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.567327 kubelet[2610]: W0307 01:04:31.567122 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.567327 kubelet[2610]: E0307 01:04:31.567144 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.567929 kubelet[2610]: E0307 01:04:31.567751 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.567929 kubelet[2610]: W0307 01:04:31.567772 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.567929 kubelet[2610]: E0307 01:04:31.567806 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.570745 kubelet[2610]: E0307 01:04:31.570589 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.570745 kubelet[2610]: W0307 01:04:31.570607 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.570745 kubelet[2610]: E0307 01:04:31.570624 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.571219 kubelet[2610]: E0307 01:04:31.571100 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.571219 kubelet[2610]: W0307 01:04:31.571118 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.571219 kubelet[2610]: E0307 01:04:31.571135 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.571919 kubelet[2610]: E0307 01:04:31.571775 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.571919 kubelet[2610]: W0307 01:04:31.571793 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.571919 kubelet[2610]: E0307 01:04:31.571810 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.572661 kubelet[2610]: E0307 01:04:31.572531 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.572661 kubelet[2610]: W0307 01:04:31.572548 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.572661 kubelet[2610]: E0307 01:04:31.572565 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.573406 kubelet[2610]: E0307 01:04:31.573220 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.573406 kubelet[2610]: W0307 01:04:31.573243 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.573406 kubelet[2610]: E0307 01:04:31.573262 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.573980 kubelet[2610]: E0307 01:04:31.573878 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.573980 kubelet[2610]: W0307 01:04:31.573896 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.573980 kubelet[2610]: E0307 01:04:31.573912 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.574657 kubelet[2610]: E0307 01:04:31.574497 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.574657 kubelet[2610]: W0307 01:04:31.574513 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.574657 kubelet[2610]: E0307 01:04:31.574529 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.575221 kubelet[2610]: E0307 01:04:31.575066 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.575221 kubelet[2610]: W0307 01:04:31.575083 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.575221 kubelet[2610]: E0307 01:04:31.575099 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.576746 kubelet[2610]: E0307 01:04:31.576587 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.576746 kubelet[2610]: W0307 01:04:31.576605 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.576746 kubelet[2610]: E0307 01:04:31.576622 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.577228 kubelet[2610]: E0307 01:04:31.577172 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.577228 kubelet[2610]: W0307 01:04:31.577191 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.577228 kubelet[2610]: E0307 01:04:31.577206 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.578075 kubelet[2610]: E0307 01:04:31.577947 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.578075 kubelet[2610]: W0307 01:04:31.577965 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.578075 kubelet[2610]: E0307 01:04:31.577981 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.579661 kubelet[2610]: E0307 01:04:31.579596 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.579661 kubelet[2610]: W0307 01:04:31.579614 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.579661 kubelet[2610]: E0307 01:04:31.579640 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.580465 kubelet[2610]: E0307 01:04:31.580235 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.580465 kubelet[2610]: W0307 01:04:31.580252 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.580465 kubelet[2610]: E0307 01:04:31.580267 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.581236 kubelet[2610]: E0307 01:04:31.580930 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.581236 kubelet[2610]: W0307 01:04:31.580947 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.581236 kubelet[2610]: E0307 01:04:31.580963 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.581648 kubelet[2610]: E0307 01:04:31.581522 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.581648 kubelet[2610]: W0307 01:04:31.581539 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.581648 kubelet[2610]: E0307 01:04:31.581554 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.582453 kubelet[2610]: E0307 01:04:31.582128 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.582453 kubelet[2610]: W0307 01:04:31.582144 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.582453 kubelet[2610]: E0307 01:04:31.582160 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.582870 kubelet[2610]: E0307 01:04:31.582723 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.582870 kubelet[2610]: W0307 01:04:31.582741 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.582870 kubelet[2610]: E0307 01:04:31.582758 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.583503 kubelet[2610]: E0307 01:04:31.583293 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.583503 kubelet[2610]: W0307 01:04:31.583310 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.583503 kubelet[2610]: E0307 01:04:31.583327 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.584214 kubelet[2610]: E0307 01:04:31.583914 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.584214 kubelet[2610]: W0307 01:04:31.583931 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.584214 kubelet[2610]: E0307 01:04:31.583948 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.584739 kubelet[2610]: E0307 01:04:31.584544 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.584739 kubelet[2610]: W0307 01:04:31.584563 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.584739 kubelet[2610]: E0307 01:04:31.584579 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.585383 kubelet[2610]: E0307 01:04:31.585140 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.585383 kubelet[2610]: W0307 01:04:31.585157 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.585383 kubelet[2610]: E0307 01:04:31.585192 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.586306 kubelet[2610]: E0307 01:04:31.585901 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.586306 kubelet[2610]: W0307 01:04:31.585922 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.586306 kubelet[2610]: E0307 01:04:31.585939 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.586781 kubelet[2610]: E0307 01:04:31.586762 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.587056 kubelet[2610]: W0307 01:04:31.586882 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.587056 kubelet[2610]: E0307 01:04:31.586908 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.587581 kubelet[2610]: E0307 01:04:31.587453 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.587581 kubelet[2610]: W0307 01:04:31.587472 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.587581 kubelet[2610]: E0307 01:04:31.587491 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.588602 kubelet[2610]: E0307 01:04:31.588402 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.588602 kubelet[2610]: W0307 01:04:31.588420 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.588602 kubelet[2610]: E0307 01:04:31.588438 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.589386 kubelet[2610]: E0307 01:04:31.589058 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.589386 kubelet[2610]: W0307 01:04:31.589076 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.589386 kubelet[2610]: E0307 01:04:31.589093 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:31.590409 kubelet[2610]: E0307 01:04:31.590185 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.590409 kubelet[2610]: W0307 01:04:31.590203 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.590409 kubelet[2610]: E0307 01:04:31.590219 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:04:31.590949 kubelet[2610]: E0307 01:04:31.590878 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:04:31.590949 kubelet[2610]: W0307 01:04:31.590896 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:04:31.590949 kubelet[2610]: E0307 01:04:31.590912 2610 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:04:32.085625 containerd[1476]: time="2026-03-07T01:04:32.085561071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:32.088039 containerd[1476]: time="2026-03-07T01:04:32.087739149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:04:32.089572 containerd[1476]: time="2026-03-07T01:04:32.089515807Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:32.092473 containerd[1476]: time="2026-03-07T01:04:32.092396734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:32.093998 containerd[1476]: time="2026-03-07T01:04:32.093399191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.067114206s" Mar 7 01:04:32.093998 containerd[1476]: time="2026-03-07T01:04:32.093452710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:04:32.099856 containerd[1476]: time="2026-03-07T01:04:32.099773555Z" level=info msg="CreateContainer within sandbox \"5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:04:32.120222 containerd[1476]: time="2026-03-07T01:04:32.120155428Z" level=info msg="CreateContainer within sandbox \"5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"732db66120b90329caf777560dc3088ea5a0b3440abe2d1dc0b5c092261141dc\"" Mar 7 01:04:32.121175 containerd[1476]: time="2026-03-07T01:04:32.121054814Z" level=info msg="StartContainer for \"732db66120b90329caf777560dc3088ea5a0b3440abe2d1dc0b5c092261141dc\"" Mar 7 01:04:32.168614 systemd[1]: Started cri-containerd-732db66120b90329caf777560dc3088ea5a0b3440abe2d1dc0b5c092261141dc.scope - libcontainer container 732db66120b90329caf777560dc3088ea5a0b3440abe2d1dc0b5c092261141dc. Mar 7 01:04:32.212219 containerd[1476]: time="2026-03-07T01:04:32.212157322Z" level=info msg="StartContainer for \"732db66120b90329caf777560dc3088ea5a0b3440abe2d1dc0b5c092261141dc\" returns successfully" Mar 7 01:04:32.229859 systemd[1]: cri-containerd-732db66120b90329caf777560dc3088ea5a0b3440abe2d1dc0b5c092261141dc.scope: Deactivated successfully. 
Mar 7 01:04:32.266753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-732db66120b90329caf777560dc3088ea5a0b3440abe2d1dc0b5c092261141dc-rootfs.mount: Deactivated successfully. Mar 7 01:04:32.354159 kubelet[2610]: E0307 01:04:32.354104 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:32.479579 kubelet[2610]: I0307 01:04:32.479529 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:04:32.504536 kubelet[2610]: I0307 01:04:32.504415 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54fc575d9f-q8wd4" podStartSLOduration=2.418219191 podStartE2EDuration="4.504387241s" podCreationTimestamp="2026-03-07 01:04:28 +0000 UTC" firstStartedPulling="2026-03-07 01:04:28.939857203 +0000 UTC m=+21.805422168" lastFinishedPulling="2026-03-07 01:04:31.026025247 +0000 UTC m=+23.891590218" observedRunningTime="2026-03-07 01:04:31.550556303 +0000 UTC m=+24.416121279" watchObservedRunningTime="2026-03-07 01:04:32.504387241 +0000 UTC m=+25.369952217" Mar 7 01:04:33.161168 containerd[1476]: time="2026-03-07T01:04:33.160862034Z" level=info msg="shim disconnected" id=732db66120b90329caf777560dc3088ea5a0b3440abe2d1dc0b5c092261141dc namespace=k8s.io Mar 7 01:04:33.161168 containerd[1476]: time="2026-03-07T01:04:33.160939701Z" level=warning msg="cleaning up after shim disconnected" id=732db66120b90329caf777560dc3088ea5a0b3440abe2d1dc0b5c092261141dc namespace=k8s.io Mar 7 01:04:33.161168 containerd[1476]: time="2026-03-07T01:04:33.160954166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:04:33.485382 containerd[1476]: time="2026-03-07T01:04:33.485211542Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:04:34.353812 kubelet[2610]: E0307 01:04:34.353738 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:36.353688 kubelet[2610]: E0307 01:04:36.353424 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:38.354071 kubelet[2610]: E0307 01:04:38.353900 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:40.355459 kubelet[2610]: E0307 01:04:40.354017 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:40.372658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387511558.mount: Deactivated successfully. 
Mar 7 01:04:40.409863 containerd[1476]: time="2026-03-07T01:04:40.409791593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:40.411419 containerd[1476]: time="2026-03-07T01:04:40.411227974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 01:04:40.412808 containerd[1476]: time="2026-03-07T01:04:40.412739529Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:40.415731 containerd[1476]: time="2026-03-07T01:04:40.415676306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:40.416871 containerd[1476]: time="2026-03-07T01:04:40.416639972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.931354284s" Mar 7 01:04:40.416871 containerd[1476]: time="2026-03-07T01:04:40.416689528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 01:04:40.422273 containerd[1476]: time="2026-03-07T01:04:40.422226964Z" level=info msg="CreateContainer within sandbox \"5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:04:40.449110 containerd[1476]: time="2026-03-07T01:04:40.448650000Z" level=info msg="CreateContainer 
within sandbox \"5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439\"" Mar 7 01:04:40.450072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355460701.mount: Deactivated successfully. Mar 7 01:04:40.450832 containerd[1476]: time="2026-03-07T01:04:40.450584184Z" level=info msg="StartContainer for \"288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439\"" Mar 7 01:04:40.506212 systemd[1]: Started cri-containerd-288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439.scope - libcontainer container 288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439. Mar 7 01:04:40.548292 containerd[1476]: time="2026-03-07T01:04:40.548242150Z" level=info msg="StartContainer for \"288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439\" returns successfully" Mar 7 01:04:40.603988 systemd[1]: cri-containerd-288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439.scope: Deactivated successfully. Mar 7 01:04:41.089042 systemd[1]: Started sshd@11-10.128.0.69:22-185.156.73.233:58732.service - OpenSSH per-connection server daemon (185.156.73.233:58732). Mar 7 01:04:41.372210 systemd[1]: run-containerd-runc-k8s.io-288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439-runc.MXbuPk.mount: Deactivated successfully. Mar 7 01:04:41.372381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439-rootfs.mount: Deactivated successfully. 
Mar 7 01:04:42.224907 containerd[1476]: time="2026-03-07T01:04:42.224590666Z" level=info msg="shim disconnected" id=288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439 namespace=k8s.io Mar 7 01:04:42.224907 containerd[1476]: time="2026-03-07T01:04:42.224668927Z" level=warning msg="cleaning up after shim disconnected" id=288c5cea07107909ef800a9b66244fe168bf35fb0e6498729258f1edc16fb439 namespace=k8s.io Mar 7 01:04:42.224907 containerd[1476]: time="2026-03-07T01:04:42.224685138Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:04:42.353783 kubelet[2610]: E0307 01:04:42.353696 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:42.396410 sshd[3386]: Invalid user admin from 185.156.73.233 port 58732 Mar 7 01:04:42.520290 containerd[1476]: time="2026-03-07T01:04:42.519469677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 01:04:42.591743 sshd[3386]: Connection closed by invalid user admin 185.156.73.233 port 58732 [preauth] Mar 7 01:04:42.592851 systemd[1]: sshd@11-10.128.0.69:22-185.156.73.233:58732.service: Deactivated successfully. 
Mar 7 01:04:42.875228 kubelet[2610]: I0307 01:04:42.874756 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:04:44.353450 kubelet[2610]: E0307 01:04:44.353369 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:46.354084 kubelet[2610]: E0307 01:04:46.354024 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:46.436228 containerd[1476]: time="2026-03-07T01:04:46.436170713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:46.437580 containerd[1476]: time="2026-03-07T01:04:46.437525882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 7 01:04:46.439569 containerd[1476]: time="2026-03-07T01:04:46.438423207Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:46.443539 containerd[1476]: time="2026-03-07T01:04:46.443425423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:04:46.445195 containerd[1476]: time="2026-03-07T01:04:46.444630795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.925107892s" Mar 7 01:04:46.445195 containerd[1476]: time="2026-03-07T01:04:46.444678977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 7 01:04:46.451510 containerd[1476]: time="2026-03-07T01:04:46.451418735Z" level=info msg="CreateContainer within sandbox \"5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:04:46.471867 containerd[1476]: time="2026-03-07T01:04:46.471804232Z" level=info msg="CreateContainer within sandbox \"5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"927a65630701fab6ebbf81df0a0aea6016052d4c9f1348ab3ec53ef05d3c7341\"" Mar 7 01:04:46.472618 containerd[1476]: time="2026-03-07T01:04:46.472468125Z" level=info msg="StartContainer for \"927a65630701fab6ebbf81df0a0aea6016052d4c9f1348ab3ec53ef05d3c7341\"" Mar 7 01:04:46.523794 systemd[1]: Started cri-containerd-927a65630701fab6ebbf81df0a0aea6016052d4c9f1348ab3ec53ef05d3c7341.scope - libcontainer container 927a65630701fab6ebbf81df0a0aea6016052d4c9f1348ab3ec53ef05d3c7341. 
Mar 7 01:04:46.572858 containerd[1476]: time="2026-03-07T01:04:46.572800981Z" level=info msg="StartContainer for \"927a65630701fab6ebbf81df0a0aea6016052d4c9f1348ab3ec53ef05d3c7341\" returns successfully" Mar 7 01:04:47.645354 containerd[1476]: time="2026-03-07T01:04:47.645278535Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:04:47.648573 systemd[1]: cri-containerd-927a65630701fab6ebbf81df0a0aea6016052d4c9f1348ab3ec53ef05d3c7341.scope: Deactivated successfully. Mar 7 01:04:47.679302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-927a65630701fab6ebbf81df0a0aea6016052d4c9f1348ab3ec53ef05d3c7341-rootfs.mount: Deactivated successfully. Mar 7 01:04:47.730147 kubelet[2610]: I0307 01:04:47.727718 2610 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 7 01:04:47.917727 systemd[1]: Created slice kubepods-besteffort-pod0dcea30e_abc3_43fe_b161_0a975c6561d9.slice - libcontainer container kubepods-besteffort-pod0dcea30e_abc3_43fe_b161_0a975c6561d9.slice. Mar 7 01:04:47.952041 systemd[1]: Created slice kubepods-besteffort-podecf71cdc_e20e_4eea_a978_1c6b126bf599.slice - libcontainer container kubepods-besteffort-podecf71cdc_e20e_4eea_a978_1c6b126bf599.slice. 
Mar 7 01:04:48.093485 kubelet[2610]: I0307 01:04:48.004118 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecf71cdc-e20e-4eea-a978-1c6b126bf599-whisker-backend-key-pair\") pod \"whisker-678dd9665c-r2nrk\" (UID: \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\") " pod="calico-system/whisker-678dd9665c-r2nrk" Mar 7 01:04:48.093485 kubelet[2610]: I0307 01:04:48.004191 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dcea30e-abc3-43fe-b161-0a975c6561d9-tigera-ca-bundle\") pod \"calico-kube-controllers-594ffc4984-hbs6d\" (UID: \"0dcea30e-abc3-43fe-b161-0a975c6561d9\") " pod="calico-system/calico-kube-controllers-594ffc4984-hbs6d" Mar 7 01:04:48.093485 kubelet[2610]: I0307 01:04:48.004240 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecf71cdc-e20e-4eea-a978-1c6b126bf599-whisker-ca-bundle\") pod \"whisker-678dd9665c-r2nrk\" (UID: \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\") " pod="calico-system/whisker-678dd9665c-r2nrk" Mar 7 01:04:48.093485 kubelet[2610]: I0307 01:04:48.004375 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf842\" (UniqueName: \"kubernetes.io/projected/0dcea30e-abc3-43fe-b161-0a975c6561d9-kube-api-access-bf842\") pod \"calico-kube-controllers-594ffc4984-hbs6d\" (UID: \"0dcea30e-abc3-43fe-b161-0a975c6561d9\") " pod="calico-system/calico-kube-controllers-594ffc4984-hbs6d" Mar 7 01:04:48.093485 kubelet[2610]: I0307 01:04:48.004465 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ecf71cdc-e20e-4eea-a978-1c6b126bf599-nginx-config\") pod 
\"whisker-678dd9665c-r2nrk\" (UID: \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\") " pod="calico-system/whisker-678dd9665c-r2nrk" Mar 7 01:04:48.093938 kubelet[2610]: I0307 01:04:48.004513 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km48w\" (UniqueName: \"kubernetes.io/projected/ecf71cdc-e20e-4eea-a978-1c6b126bf599-kube-api-access-km48w\") pod \"whisker-678dd9665c-r2nrk\" (UID: \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\") " pod="calico-system/whisker-678dd9665c-r2nrk" Mar 7 01:04:48.168062 systemd[1]: Created slice kubepods-besteffort-pod286f9f87_acb9_4bde_81b8_c11f70245864.slice - libcontainer container kubepods-besteffort-pod286f9f87_acb9_4bde_81b8_c11f70245864.slice. Mar 7 01:04:48.207078 kubelet[2610]: I0307 01:04:48.206849 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/286f9f87-acb9-4bde-81b8-c11f70245864-calico-apiserver-certs\") pod \"calico-apiserver-85c759574b-shlqv\" (UID: \"286f9f87-acb9-4bde-81b8-c11f70245864\") " pod="calico-system/calico-apiserver-85c759574b-shlqv" Mar 7 01:04:48.207078 kubelet[2610]: I0307 01:04:48.206923 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrhp\" (UniqueName: \"kubernetes.io/projected/286f9f87-acb9-4bde-81b8-c11f70245864-kube-api-access-chrhp\") pod \"calico-apiserver-85c759574b-shlqv\" (UID: \"286f9f87-acb9-4bde-81b8-c11f70245864\") " pod="calico-system/calico-apiserver-85c759574b-shlqv" Mar 7 01:04:48.244275 containerd[1476]: time="2026-03-07T01:04:48.243565583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-594ffc4984-hbs6d,Uid:0dcea30e-abc3-43fe-b161-0a975c6561d9,Namespace:calico-system,Attempt:0,}" Mar 7 01:04:48.248912 containerd[1476]: time="2026-03-07T01:04:48.248843086Z" level=info msg="shim disconnected" 
id=927a65630701fab6ebbf81df0a0aea6016052d4c9f1348ab3ec53ef05d3c7341 namespace=k8s.io Mar 7 01:04:48.249276 containerd[1476]: time="2026-03-07T01:04:48.249007585Z" level=warning msg="cleaning up after shim disconnected" id=927a65630701fab6ebbf81df0a0aea6016052d4c9f1348ab3ec53ef05d3c7341 namespace=k8s.io Mar 7 01:04:48.249276 containerd[1476]: time="2026-03-07T01:04:48.249029043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:04:48.263558 systemd[1]: Created slice kubepods-burstable-pod233af786_f839_4b49_bfb9_77d5d44842dc.slice - libcontainer container kubepods-burstable-pod233af786_f839_4b49_bfb9_77d5d44842dc.slice. Mar 7 01:04:48.304592 systemd[1]: Created slice kubepods-besteffort-pod9d68eacf_53a9_41f5_a9a3_d1b563899713.slice - libcontainer container kubepods-besteffort-pod9d68eacf_53a9_41f5_a9a3_d1b563899713.slice. Mar 7 01:04:48.314424 kubelet[2610]: I0307 01:04:48.314299 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9-calico-apiserver-certs\") pod \"calico-apiserver-85c759574b-842dd\" (UID: \"f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9\") " pod="calico-system/calico-apiserver-85c759574b-842dd" Mar 7 01:04:48.314424 kubelet[2610]: I0307 01:04:48.314391 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9d68eacf-53a9-41f5-a9a3-d1b563899713-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-zlg5q\" (UID: \"9d68eacf-53a9-41f5-a9a3-d1b563899713\") " pod="calico-system/goldmane-cccfbd5cf-zlg5q" Mar 7 01:04:48.314424 kubelet[2610]: I0307 01:04:48.314425 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6smtr\" (UniqueName: \"kubernetes.io/projected/9d68eacf-53a9-41f5-a9a3-d1b563899713-kube-api-access-6smtr\") pod 
\"goldmane-cccfbd5cf-zlg5q\" (UID: \"9d68eacf-53a9-41f5-a9a3-d1b563899713\") " pod="calico-system/goldmane-cccfbd5cf-zlg5q" Mar 7 01:04:48.315236 kubelet[2610]: I0307 01:04:48.314450 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68nph\" (UniqueName: \"kubernetes.io/projected/233af786-f839-4b49-bfb9-77d5d44842dc-kube-api-access-68nph\") pod \"coredns-66bc5c9577-r4h6f\" (UID: \"233af786-f839-4b49-bfb9-77d5d44842dc\") " pod="kube-system/coredns-66bc5c9577-r4h6f" Mar 7 01:04:48.317496 kubelet[2610]: I0307 01:04:48.314481 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/680dd4ad-45eb-49a1-b4c7-db4a6b1269ec-config-volume\") pod \"coredns-66bc5c9577-v5hrm\" (UID: \"680dd4ad-45eb-49a1-b4c7-db4a6b1269ec\") " pod="kube-system/coredns-66bc5c9577-v5hrm" Mar 7 01:04:48.317496 kubelet[2610]: I0307 01:04:48.317384 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d68eacf-53a9-41f5-a9a3-d1b563899713-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-zlg5q\" (UID: \"9d68eacf-53a9-41f5-a9a3-d1b563899713\") " pod="calico-system/goldmane-cccfbd5cf-zlg5q" Mar 7 01:04:48.319271 kubelet[2610]: I0307 01:04:48.317464 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/233af786-f839-4b49-bfb9-77d5d44842dc-config-volume\") pod \"coredns-66bc5c9577-r4h6f\" (UID: \"233af786-f839-4b49-bfb9-77d5d44842dc\") " pod="kube-system/coredns-66bc5c9577-r4h6f" Mar 7 01:04:48.319271 kubelet[2610]: I0307 01:04:48.317901 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hn7w\" (UniqueName: 
\"kubernetes.io/projected/680dd4ad-45eb-49a1-b4c7-db4a6b1269ec-kube-api-access-9hn7w\") pod \"coredns-66bc5c9577-v5hrm\" (UID: \"680dd4ad-45eb-49a1-b4c7-db4a6b1269ec\") " pod="kube-system/coredns-66bc5c9577-v5hrm" Mar 7 01:04:48.319271 kubelet[2610]: I0307 01:04:48.317967 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhtj\" (UniqueName: \"kubernetes.io/projected/f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9-kube-api-access-6rhtj\") pod \"calico-apiserver-85c759574b-842dd\" (UID: \"f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9\") " pod="calico-system/calico-apiserver-85c759574b-842dd" Mar 7 01:04:48.319271 kubelet[2610]: I0307 01:04:48.317996 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d68eacf-53a9-41f5-a9a3-d1b563899713-config\") pod \"goldmane-cccfbd5cf-zlg5q\" (UID: \"9d68eacf-53a9-41f5-a9a3-d1b563899713\") " pod="calico-system/goldmane-cccfbd5cf-zlg5q" Mar 7 01:04:48.335512 systemd[1]: Created slice kubepods-besteffort-podf43e7b82_4264_485d_8e5b_8aa4fa5b5ef9.slice - libcontainer container kubepods-besteffort-podf43e7b82_4264_485d_8e5b_8aa4fa5b5ef9.slice. Mar 7 01:04:48.356996 systemd[1]: Created slice kubepods-burstable-pod680dd4ad_45eb_49a1_b4c7_db4a6b1269ec.slice - libcontainer container kubepods-burstable-pod680dd4ad_45eb_49a1_b4c7_db4a6b1269ec.slice. Mar 7 01:04:48.375006 systemd[1]: Created slice kubepods-besteffort-podfad7ec34_4cf5_4a59_a390_83631ed6b6c6.slice - libcontainer container kubepods-besteffort-podfad7ec34_4cf5_4a59_a390_83631ed6b6c6.slice. 
Mar 7 01:04:48.394360 containerd[1476]: time="2026-03-07T01:04:48.393581357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhvgj,Uid:fad7ec34-4cf5-4a59-a390-83631ed6b6c6,Namespace:calico-system,Attempt:0,}" Mar 7 01:04:48.402196 containerd[1476]: time="2026-03-07T01:04:48.402124754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-678dd9665c-r2nrk,Uid:ecf71cdc-e20e-4eea-a978-1c6b126bf599,Namespace:calico-system,Attempt:0,}" Mar 7 01:04:48.481898 containerd[1476]: time="2026-03-07T01:04:48.481754341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c759574b-shlqv,Uid:286f9f87-acb9-4bde-81b8-c11f70245864,Namespace:calico-system,Attempt:0,}" Mar 7 01:04:48.514481 containerd[1476]: time="2026-03-07T01:04:48.514418159Z" level=error msg="Failed to destroy network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.516155 containerd[1476]: time="2026-03-07T01:04:48.516000240Z" level=error msg="encountered an error cleaning up failed sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.516155 containerd[1476]: time="2026-03-07T01:04:48.516085215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-594ffc4984-hbs6d,Uid:0dcea30e-abc3-43fe-b161-0a975c6561d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.517385 kubelet[2610]: E0307 01:04:48.516447 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.517385 kubelet[2610]: E0307 01:04:48.516544 2610 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-594ffc4984-hbs6d" Mar 7 01:04:48.517385 kubelet[2610]: E0307 01:04:48.516580 2610 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-594ffc4984-hbs6d" Mar 7 01:04:48.517652 kubelet[2610]: E0307 01:04:48.516660 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-594ffc4984-hbs6d_calico-system(0dcea30e-abc3-43fe-b161-0a975c6561d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-594ffc4984-hbs6d_calico-system(0dcea30e-abc3-43fe-b161-0a975c6561d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-594ffc4984-hbs6d" podUID="0dcea30e-abc3-43fe-b161-0a975c6561d9" Mar 7 01:04:48.576222 kubelet[2610]: I0307 01:04:48.574870 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Mar 7 01:04:48.578720 containerd[1476]: time="2026-03-07T01:04:48.577707742Z" level=info msg="StopPodSandbox for \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\"" Mar 7 01:04:48.587450 containerd[1476]: time="2026-03-07T01:04:48.585890369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r4h6f,Uid:233af786-f839-4b49-bfb9-77d5d44842dc,Namespace:kube-system,Attempt:0,}" Mar 7 01:04:48.594438 containerd[1476]: time="2026-03-07T01:04:48.594390020Z" level=info msg="Ensure that sandbox 7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a in task-service has been cleanup successfully" Mar 7 01:04:48.604935 containerd[1476]: time="2026-03-07T01:04:48.604878843Z" level=info msg="CreateContainer within sandbox \"5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 01:04:48.623148 containerd[1476]: time="2026-03-07T01:04:48.623086391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-zlg5q,Uid:9d68eacf-53a9-41f5-a9a3-d1b563899713,Namespace:calico-system,Attempt:0,}" Mar 7 01:04:48.653177 containerd[1476]: time="2026-03-07T01:04:48.652761950Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c759574b-842dd,Uid:f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9,Namespace:calico-system,Attempt:0,}" Mar 7 01:04:48.670927 containerd[1476]: time="2026-03-07T01:04:48.670852655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v5hrm,Uid:680dd4ad-45eb-49a1-b4c7-db4a6b1269ec,Namespace:kube-system,Attempt:0,}" Mar 7 01:04:48.713561 containerd[1476]: time="2026-03-07T01:04:48.713490336Z" level=info msg="CreateContainer within sandbox \"5f7336df0787aca2d573c0c2011a7d5d93d694269887f6afd05c86f49b1cbf3e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7a0e657561bf79a421389b341ee1682afcf4a8b3a71f56e7e646139ce9a9144c\"" Mar 7 01:04:48.723369 containerd[1476]: time="2026-03-07T01:04:48.723043726Z" level=info msg="StartContainer for \"7a0e657561bf79a421389b341ee1682afcf4a8b3a71f56e7e646139ce9a9144c\"" Mar 7 01:04:48.752710 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a-shm.mount: Deactivated successfully. 
Mar 7 01:04:48.832400 containerd[1476]: time="2026-03-07T01:04:48.830896998Z" level=error msg="Failed to destroy network for sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.838171 containerd[1476]: time="2026-03-07T01:04:48.836993955Z" level=error msg="encountered an error cleaning up failed sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.838171 containerd[1476]: time="2026-03-07T01:04:48.837112314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-678dd9665c-r2nrk,Uid:ecf71cdc-e20e-4eea-a978-1c6b126bf599,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.838150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0-shm.mount: Deactivated successfully. 
Mar 7 01:04:48.839955 kubelet[2610]: E0307 01:04:48.839909 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.841009 kubelet[2610]: E0307 01:04:48.839985 2610 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-678dd9665c-r2nrk" Mar 7 01:04:48.841009 kubelet[2610]: E0307 01:04:48.840014 2610 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-678dd9665c-r2nrk" Mar 7 01:04:48.841009 kubelet[2610]: E0307 01:04:48.840095 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-678dd9665c-r2nrk_calico-system(ecf71cdc-e20e-4eea-a978-1c6b126bf599)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-678dd9665c-r2nrk_calico-system(ecf71cdc-e20e-4eea-a978-1c6b126bf599)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-678dd9665c-r2nrk" podUID="ecf71cdc-e20e-4eea-a978-1c6b126bf599" Mar 7 01:04:48.912720 containerd[1476]: time="2026-03-07T01:04:48.912471882Z" level=error msg="Failed to destroy network for sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.913004 containerd[1476]: time="2026-03-07T01:04:48.912957164Z" level=error msg="encountered an error cleaning up failed sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.913101 containerd[1476]: time="2026-03-07T01:04:48.913040618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhvgj,Uid:fad7ec34-4cf5-4a59-a390-83631ed6b6c6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.913566 kubelet[2610]: E0307 01:04:48.913393 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 7 01:04:48.913566 kubelet[2610]: E0307 01:04:48.913460 2610 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hhvgj" Mar 7 01:04:48.913566 kubelet[2610]: E0307 01:04:48.913495 2610 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hhvgj" Mar 7 01:04:48.913836 kubelet[2610]: E0307 01:04:48.913612 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hhvgj_calico-system(fad7ec34-4cf5-4a59-a390-83631ed6b6c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hhvgj_calico-system(fad7ec34-4cf5-4a59-a390-83631ed6b6c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hhvgj" podUID="fad7ec34-4cf5-4a59-a390-83631ed6b6c6" Mar 7 01:04:48.923634 containerd[1476]: time="2026-03-07T01:04:48.922274481Z" level=error msg="StopPodSandbox for \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\" 
failed" error="failed to destroy network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.923795 kubelet[2610]: E0307 01:04:48.922620 2610 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Mar 7 01:04:48.923795 kubelet[2610]: E0307 01:04:48.922785 2610 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a"} Mar 7 01:04:48.923795 kubelet[2610]: E0307 01:04:48.923514 2610 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0dcea30e-abc3-43fe-b161-0a975c6561d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:04:48.923795 kubelet[2610]: E0307 01:04:48.923581 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0dcea30e-abc3-43fe-b161-0a975c6561d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-594ffc4984-hbs6d" podUID="0dcea30e-abc3-43fe-b161-0a975c6561d9" Mar 7 01:04:48.979383 containerd[1476]: time="2026-03-07T01:04:48.978558515Z" level=error msg="Failed to destroy network for sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.982725 containerd[1476]: time="2026-03-07T01:04:48.982654868Z" level=error msg="encountered an error cleaning up failed sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.983027 containerd[1476]: time="2026-03-07T01:04:48.982744037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c759574b-shlqv,Uid:286f9f87-acb9-4bde-81b8-c11f70245864,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.983223 kubelet[2610]: E0307 01:04:48.983040 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:48.983223 kubelet[2610]: E0307 01:04:48.983111 2610 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-85c759574b-shlqv" Mar 7 01:04:48.983223 kubelet[2610]: E0307 01:04:48.983145 2610 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-85c759574b-shlqv" Mar 7 01:04:48.984781 kubelet[2610]: E0307 01:04:48.983245 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85c759574b-shlqv_calico-system(286f9f87-acb9-4bde-81b8-c11f70245864)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85c759574b-shlqv_calico-system(286f9f87-acb9-4bde-81b8-c11f70245864)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-85c759574b-shlqv" podUID="286f9f87-acb9-4bde-81b8-c11f70245864" Mar 7 01:04:49.011852 
systemd[1]: Started cri-containerd-7a0e657561bf79a421389b341ee1682afcf4a8b3a71f56e7e646139ce9a9144c.scope - libcontainer container 7a0e657561bf79a421389b341ee1682afcf4a8b3a71f56e7e646139ce9a9144c. Mar 7 01:04:49.081400 containerd[1476]: time="2026-03-07T01:04:49.081186274Z" level=error msg="Failed to destroy network for sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.084368 containerd[1476]: time="2026-03-07T01:04:49.081916419Z" level=error msg="encountered an error cleaning up failed sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.084368 containerd[1476]: time="2026-03-07T01:04:49.082005746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v5hrm,Uid:680dd4ad-45eb-49a1-b4c7-db4a6b1269ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.084744 kubelet[2610]: E0307 01:04:49.082278 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 7 01:04:49.084744 kubelet[2610]: E0307 01:04:49.082417 2610 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v5hrm" Mar 7 01:04:49.084744 kubelet[2610]: E0307 01:04:49.082482 2610 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v5hrm" Mar 7 01:04:49.084983 kubelet[2610]: E0307 01:04:49.082584 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-v5hrm_kube-system(680dd4ad-45eb-49a1-b4c7-db4a6b1269ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-v5hrm_kube-system(680dd4ad-45eb-49a1-b4c7-db4a6b1269ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-v5hrm" podUID="680dd4ad-45eb-49a1-b4c7-db4a6b1269ec" Mar 7 01:04:49.103954 containerd[1476]: time="2026-03-07T01:04:49.103535844Z" level=error msg="Failed to destroy network for sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.104130 containerd[1476]: time="2026-03-07T01:04:49.103993942Z" level=error msg="encountered an error cleaning up failed sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.104130 containerd[1476]: time="2026-03-07T01:04:49.104079820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r4h6f,Uid:233af786-f839-4b49-bfb9-77d5d44842dc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.106221 kubelet[2610]: E0307 01:04:49.106008 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.106221 kubelet[2610]: E0307 01:04:49.106118 2610 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-r4h6f" Mar 7 01:04:49.106221 kubelet[2610]: E0307 01:04:49.106194 2610 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-r4h6f" Mar 7 01:04:49.106746 kubelet[2610]: E0307 01:04:49.106301 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-r4h6f_kube-system(233af786-f839-4b49-bfb9-77d5d44842dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-r4h6f_kube-system(233af786-f839-4b49-bfb9-77d5d44842dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-r4h6f" podUID="233af786-f839-4b49-bfb9-77d5d44842dc" Mar 7 01:04:49.120491 containerd[1476]: time="2026-03-07T01:04:49.119102756Z" level=info msg="StartContainer for \"7a0e657561bf79a421389b341ee1682afcf4a8b3a71f56e7e646139ce9a9144c\" returns successfully" Mar 7 01:04:49.143281 containerd[1476]: time="2026-03-07T01:04:49.143185732Z" level=error msg="Failed to destroy network for sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 
01:04:49.144587 containerd[1476]: time="2026-03-07T01:04:49.143870233Z" level=error msg="encountered an error cleaning up failed sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.144587 containerd[1476]: time="2026-03-07T01:04:49.143940910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c759574b-842dd,Uid:f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.144783 kubelet[2610]: E0307 01:04:49.144633 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.144928 kubelet[2610]: E0307 01:04:49.144873 2610 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-85c759574b-842dd" Mar 7 01:04:49.145029 kubelet[2610]: E0307 01:04:49.144973 2610 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-85c759574b-842dd" Mar 7 01:04:49.146465 kubelet[2610]: E0307 01:04:49.146326 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85c759574b-842dd_calico-system(f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85c759574b-842dd_calico-system(f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-85c759574b-842dd" podUID="f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9" Mar 7 01:04:49.164801 containerd[1476]: time="2026-03-07T01:04:49.164716932Z" level=error msg="Failed to destroy network for sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.165636 containerd[1476]: time="2026-03-07T01:04:49.165564765Z" level=error msg="encountered an error cleaning up failed sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.165879 containerd[1476]: time="2026-03-07T01:04:49.165825087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-zlg5q,Uid:9d68eacf-53a9-41f5-a9a3-d1b563899713,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.167837 kubelet[2610]: E0307 01:04:49.166492 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:04:49.167837 kubelet[2610]: E0307 01:04:49.166564 2610 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-zlg5q" Mar 7 01:04:49.167837 kubelet[2610]: E0307 01:04:49.166607 2610 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-zlg5q" Mar 7 01:04:49.168092 kubelet[2610]: E0307 01:04:49.166691 2610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-zlg5q_calico-system(9d68eacf-53a9-41f5-a9a3-d1b563899713)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-zlg5q_calico-system(9d68eacf-53a9-41f5-a9a3-d1b563899713)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-zlg5q" podUID="9d68eacf-53a9-41f5-a9a3-d1b563899713" Mar 7 01:04:49.579901 kubelet[2610]: I0307 01:04:49.579855 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:04:49.581028 containerd[1476]: time="2026-03-07T01:04:49.580983583Z" level=info msg="StopPodSandbox for \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\"" Mar 7 01:04:49.581279 containerd[1476]: time="2026-03-07T01:04:49.581238502Z" level=info msg="Ensure that sandbox 9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1 in task-service has been cleanup successfully" Mar 7 01:04:49.584362 kubelet[2610]: I0307 01:04:49.584279 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Mar 7 01:04:49.587361 containerd[1476]: time="2026-03-07T01:04:49.586715898Z" level=info msg="StopPodSandbox for \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\"" Mar 7 01:04:49.587361 containerd[1476]: time="2026-03-07T01:04:49.587078288Z" level=info msg="Ensure that 
sandbox 8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de in task-service has been cleanup successfully" Mar 7 01:04:49.590438 kubelet[2610]: I0307 01:04:49.590397 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:04:49.592144 containerd[1476]: time="2026-03-07T01:04:49.591509157Z" level=info msg="StopPodSandbox for \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\"" Mar 7 01:04:49.592144 containerd[1476]: time="2026-03-07T01:04:49.591755772Z" level=info msg="Ensure that sandbox fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191 in task-service has been cleanup successfully" Mar 7 01:04:49.595651 kubelet[2610]: I0307 01:04:49.595508 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Mar 7 01:04:49.598921 containerd[1476]: time="2026-03-07T01:04:49.598500787Z" level=info msg="StopPodSandbox for \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\"" Mar 7 01:04:49.602937 containerd[1476]: time="2026-03-07T01:04:49.602522428Z" level=info msg="Ensure that sandbox 90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3 in task-service has been cleanup successfully" Mar 7 01:04:49.603656 kubelet[2610]: I0307 01:04:49.603628 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:04:49.605890 containerd[1476]: time="2026-03-07T01:04:49.605837580Z" level=info msg="StopPodSandbox for \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\"" Mar 7 01:04:49.607837 containerd[1476]: time="2026-03-07T01:04:49.607489836Z" level=info msg="Ensure that sandbox c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0 in task-service has been cleanup successfully" Mar 7 
01:04:49.634491 kubelet[2610]: I0307 01:04:49.634458 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:04:49.642325 containerd[1476]: time="2026-03-07T01:04:49.641288535Z" level=info msg="StopPodSandbox for \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\"" Mar 7 01:04:49.688696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5-shm.mount: Deactivated successfully. Mar 7 01:04:49.688859 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1-shm.mount: Deactivated successfully. Mar 7 01:04:49.688984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3-shm.mount: Deactivated successfully. Mar 7 01:04:49.689095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd-shm.mount: Deactivated successfully. Mar 7 01:04:49.690474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de-shm.mount: Deactivated successfully. Mar 7 01:04:49.690643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191-shm.mount: Deactivated successfully. 
Mar 7 01:04:49.715193 containerd[1476]: time="2026-03-07T01:04:49.714865260Z" level=info msg="Ensure that sandbox acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5 in task-service has been cleanup successfully" Mar 7 01:04:49.717906 kubelet[2610]: I0307 01:04:49.717871 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:04:49.722072 containerd[1476]: time="2026-03-07T01:04:49.722028293Z" level=info msg="StopPodSandbox for \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\"" Mar 7 01:04:49.726692 containerd[1476]: time="2026-03-07T01:04:49.726644618Z" level=info msg="Ensure that sandbox 94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd in task-service has been cleanup successfully" Mar 7 01:04:49.769115 kubelet[2610]: I0307 01:04:49.768444 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d4bqh" podStartSLOduration=4.355337957 podStartE2EDuration="21.768421652s" podCreationTimestamp="2026-03-07 01:04:28 +0000 UTC" firstStartedPulling="2026-03-07 01:04:29.032881506 +0000 UTC m=+21.898446475" lastFinishedPulling="2026-03-07 01:04:46.445965223 +0000 UTC m=+39.311530170" observedRunningTime="2026-03-07 01:04:49.690165719 +0000 UTC m=+42.555730692" watchObservedRunningTime="2026-03-07 01:04:49.768421652 +0000 UTC m=+42.633986626" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:49.770 [INFO][3821] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:49.774 [INFO][3821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" iface="eth0" netns="/var/run/netns/cni-ddf6470f-4747-a236-6d40-895f44d15f1a" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:49.777 [INFO][3821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" iface="eth0" netns="/var/run/netns/cni-ddf6470f-4747-a236-6d40-895f44d15f1a" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:49.777 [INFO][3821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" iface="eth0" netns="/var/run/netns/cni-ddf6470f-4747-a236-6d40-895f44d15f1a" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:49.777 [INFO][3821] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:49.777 [INFO][3821] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:50.030 [INFO][3853] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" HandleID="k8s-pod-network.90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:50.032 [INFO][3853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:50.032 [INFO][3853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:50.058 [WARNING][3853] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" HandleID="k8s-pod-network.90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:50.058 [INFO][3853] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" HandleID="k8s-pod-network.90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:50.077 [INFO][3853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:50.104585 containerd[1476]: 2026-03-07 01:04:50.100 [INFO][3821] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Mar 7 01:04:50.106827 containerd[1476]: time="2026-03-07T01:04:50.105682060Z" level=info msg="TearDown network for sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\" successfully" Mar 7 01:04:50.106827 containerd[1476]: time="2026-03-07T01:04:50.106551008Z" level=info msg="StopPodSandbox for \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\" returns successfully" Mar 7 01:04:50.120105 systemd[1]: run-netns-cni\x2dddf6470f\x2d4747\x2da236\x2d6d40\x2d895f44d15f1a.mount: Deactivated successfully. 
Mar 7 01:04:50.127565 containerd[1476]: time="2026-03-07T01:04:50.126648893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-zlg5q,Uid:9d68eacf-53a9-41f5-a9a3-d1b563899713,Namespace:calico-system,Attempt:1,}" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:49.940 [INFO][3795] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:49.940 [INFO][3795] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" iface="eth0" netns="/var/run/netns/cni-45afe955-70e8-5ea3-d991-ea8e28d3b7b2" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:49.940 [INFO][3795] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" iface="eth0" netns="/var/run/netns/cni-45afe955-70e8-5ea3-d991-ea8e28d3b7b2" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:49.942 [INFO][3795] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" iface="eth0" netns="/var/run/netns/cni-45afe955-70e8-5ea3-d991-ea8e28d3b7b2" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:49.942 [INFO][3795] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:49.942 [INFO][3795] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:50.067 [INFO][3882] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" HandleID="k8s-pod-network.9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:50.068 [INFO][3882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:50.078 [INFO][3882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:50.116 [WARNING][3882] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" HandleID="k8s-pod-network.9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:50.119 [INFO][3882] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" HandleID="k8s-pod-network.9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:50.127 [INFO][3882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:50.154398 containerd[1476]: 2026-03-07 01:04:50.143 [INFO][3795] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:04:50.157383 containerd[1476]: time="2026-03-07T01:04:50.156418393Z" level=info msg="TearDown network for sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\" successfully" Mar 7 01:04:50.157383 containerd[1476]: time="2026-03-07T01:04:50.156467437Z" level=info msg="StopPodSandbox for \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\" returns successfully" Mar 7 01:04:50.165918 containerd[1476]: time="2026-03-07T01:04:50.165856199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c759574b-842dd,Uid:f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9,Namespace:calico-system,Attempt:1,}" Mar 7 01:04:50.166792 systemd[1]: run-netns-cni\x2d45afe955\x2d70e8\x2d5ea3\x2dd991\x2dea8e28d3b7b2.mount: Deactivated successfully. 
Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:49.974 [INFO][3861] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:49.976 [INFO][3861] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" iface="eth0" netns="/var/run/netns/cni-5f10fabd-2f52-48b9-c5fe-27c8d05bf313" Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:49.977 [INFO][3861] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" iface="eth0" netns="/var/run/netns/cni-5f10fabd-2f52-48b9-c5fe-27c8d05bf313" Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:49.978 [INFO][3861] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" iface="eth0" netns="/var/run/netns/cni-5f10fabd-2f52-48b9-c5fe-27c8d05bf313" Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:49.978 [INFO][3861] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:49.978 [INFO][3861] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:50.206 [INFO][3888] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" HandleID="k8s-pod-network.94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 
01:04:50.208 [INFO][3888] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:50.208 [INFO][3888] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:50.228 [WARNING][3888] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" HandleID="k8s-pod-network.94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:50.228 [INFO][3888] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" HandleID="k8s-pod-network.94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:50.231 [INFO][3888] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:50.247885 containerd[1476]: 2026-03-07 01:04:50.239 [INFO][3861] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:04:50.249532 containerd[1476]: time="2026-03-07T01:04:50.249294929Z" level=info msg="TearDown network for sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\" successfully" Mar 7 01:04:50.249837 containerd[1476]: time="2026-03-07T01:04:50.249698408Z" level=info msg="StopPodSandbox for \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\" returns successfully" Mar 7 01:04:50.257952 containerd[1476]: time="2026-03-07T01:04:50.257443172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r4h6f,Uid:233af786-f839-4b49-bfb9-77d5d44842dc,Namespace:kube-system,Attempt:1,}" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:49.855 [INFO][3801] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:49.860 [INFO][3801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" iface="eth0" netns="/var/run/netns/cni-65758f96-d292-ff5a-dfe3-b2bc638d1603" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:49.860 [INFO][3801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" iface="eth0" netns="/var/run/netns/cni-65758f96-d292-ff5a-dfe3-b2bc638d1603" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:49.864 [INFO][3801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" iface="eth0" netns="/var/run/netns/cni-65758f96-d292-ff5a-dfe3-b2bc638d1603" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:49.864 [INFO][3801] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:49.864 [INFO][3801] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:50.206 [INFO][3875] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" HandleID="k8s-pod-network.8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:50.216 [INFO][3875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:50.234 [INFO][3875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:50.254 [WARNING][3875] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" HandleID="k8s-pod-network.8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:50.254 [INFO][3875] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" HandleID="k8s-pod-network.8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:50.259 [INFO][3875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:50.270960 containerd[1476]: 2026-03-07 01:04:50.263 [INFO][3801] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Mar 7 01:04:50.273312 containerd[1476]: time="2026-03-07T01:04:50.271127463Z" level=info msg="TearDown network for sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\" successfully" Mar 7 01:04:50.273312 containerd[1476]: time="2026-03-07T01:04:50.271202378Z" level=info msg="StopPodSandbox for \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\" returns successfully" Mar 7 01:04:50.275994 containerd[1476]: time="2026-03-07T01:04:50.275954309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c759574b-shlqv,Uid:286f9f87-acb9-4bde-81b8-c11f70245864,Namespace:calico-system,Attempt:1,}" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.007 [INFO][3814] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.007 [INFO][3814] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" iface="eth0" netns="/var/run/netns/cni-2926cf56-5e4f-b512-f344-b531ef51eef2" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.007 [INFO][3814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" iface="eth0" netns="/var/run/netns/cni-2926cf56-5e4f-b512-f344-b531ef51eef2" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.008 [INFO][3814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" iface="eth0" netns="/var/run/netns/cni-2926cf56-5e4f-b512-f344-b531ef51eef2" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.008 [INFO][3814] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.008 [INFO][3814] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.270 [INFO][3895] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" HandleID="k8s-pod-network.fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.272 [INFO][3895] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.274 [INFO][3895] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.295 [WARNING][3895] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" HandleID="k8s-pod-network.fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.295 [INFO][3895] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" HandleID="k8s-pod-network.fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.302 [INFO][3895] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:50.309278 containerd[1476]: 2026-03-07 01:04:50.307 [INFO][3814] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:04:50.311076 containerd[1476]: time="2026-03-07T01:04:50.310321717Z" level=info msg="TearDown network for sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\" successfully" Mar 7 01:04:50.311076 containerd[1476]: time="2026-03-07T01:04:50.310441770Z" level=info msg="StopPodSandbox for \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\" returns successfully" Mar 7 01:04:50.315384 containerd[1476]: time="2026-03-07T01:04:50.315216205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhvgj,Uid:fad7ec34-4cf5-4a59-a390-83631ed6b6c6,Namespace:calico-system,Attempt:1,}" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.087 [INFO][3832] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.089 [INFO][3832] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" iface="eth0" netns="/var/run/netns/cni-e89e340a-b3c3-b57c-2a6e-8005c4a55cbb" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.091 [INFO][3832] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" iface="eth0" netns="/var/run/netns/cni-e89e340a-b3c3-b57c-2a6e-8005c4a55cbb" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.091 [INFO][3832] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" iface="eth0" netns="/var/run/netns/cni-e89e340a-b3c3-b57c-2a6e-8005c4a55cbb" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.092 [INFO][3832] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.092 [INFO][3832] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.319 [INFO][3906] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" HandleID="k8s-pod-network.c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.329 [INFO][3906] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.330 [INFO][3906] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.359 [WARNING][3906] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" HandleID="k8s-pod-network.c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.359 [INFO][3906] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" HandleID="k8s-pod-network.c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.362 [INFO][3906] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:50.378890 containerd[1476]: 2026-03-07 01:04:50.366 [INFO][3832] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:04:50.378890 containerd[1476]: time="2026-03-07T01:04:50.376054013Z" level=info msg="TearDown network for sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\" successfully" Mar 7 01:04:50.378890 containerd[1476]: time="2026-03-07T01:04:50.376088647Z" level=info msg="StopPodSandbox for \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\" returns successfully" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.131 [INFO][3854] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.131 [INFO][3854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" iface="eth0" netns="/var/run/netns/cni-09458150-23da-2dc9-1a06-6d0cb032cfc0" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.133 [INFO][3854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" iface="eth0" netns="/var/run/netns/cni-09458150-23da-2dc9-1a06-6d0cb032cfc0" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.138 [INFO][3854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" iface="eth0" netns="/var/run/netns/cni-09458150-23da-2dc9-1a06-6d0cb032cfc0" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.138 [INFO][3854] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.138 [INFO][3854] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.373 [INFO][3911] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" HandleID="k8s-pod-network.acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.374 [INFO][3911] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.374 [INFO][3911] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.414 [WARNING][3911] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" HandleID="k8s-pod-network.acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.414 [INFO][3911] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" HandleID="k8s-pod-network.acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.417 [INFO][3911] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:50.432605 containerd[1476]: 2026-03-07 01:04:50.428 [INFO][3854] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:04:50.433839 containerd[1476]: time="2026-03-07T01:04:50.433567093Z" level=info msg="TearDown network for sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\" successfully" Mar 7 01:04:50.433839 containerd[1476]: time="2026-03-07T01:04:50.433620342Z" level=info msg="StopPodSandbox for \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\" returns successfully" Mar 7 01:04:50.437402 containerd[1476]: time="2026-03-07T01:04:50.436935613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v5hrm,Uid:680dd4ad-45eb-49a1-b4c7-db4a6b1269ec,Namespace:kube-system,Attempt:1,}" Mar 7 01:04:50.438244 kubelet[2610]: I0307 01:04:50.438196 2610 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km48w\" (UniqueName: \"kubernetes.io/projected/ecf71cdc-e20e-4eea-a978-1c6b126bf599-kube-api-access-km48w\") pod \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\" (UID: \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\") " Mar 7 01:04:50.439366 kubelet[2610]: I0307 01:04:50.438878 2610 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecf71cdc-e20e-4eea-a978-1c6b126bf599-whisker-ca-bundle\") pod \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\" (UID: \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\") " Mar 7 01:04:50.439366 kubelet[2610]: I0307 01:04:50.438934 2610 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecf71cdc-e20e-4eea-a978-1c6b126bf599-whisker-backend-key-pair\") pod \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\" (UID: \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\") " Mar 7 01:04:50.439366 kubelet[2610]: I0307 01:04:50.438988 2610 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/ecf71cdc-e20e-4eea-a978-1c6b126bf599-nginx-config\") pod \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\" (UID: \"ecf71cdc-e20e-4eea-a978-1c6b126bf599\") " Mar 7 01:04:50.439797 kubelet[2610]: I0307 01:04:50.439763 2610 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecf71cdc-e20e-4eea-a978-1c6b126bf599-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "ecf71cdc-e20e-4eea-a978-1c6b126bf599" (UID: "ecf71cdc-e20e-4eea-a978-1c6b126bf599"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:04:50.440436 kubelet[2610]: I0307 01:04:50.440405 2610 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecf71cdc-e20e-4eea-a978-1c6b126bf599-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ecf71cdc-e20e-4eea-a978-1c6b126bf599" (UID: "ecf71cdc-e20e-4eea-a978-1c6b126bf599"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:04:50.449103 kubelet[2610]: I0307 01:04:50.449063 2610 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf71cdc-e20e-4eea-a978-1c6b126bf599-kube-api-access-km48w" (OuterVolumeSpecName: "kube-api-access-km48w") pod "ecf71cdc-e20e-4eea-a978-1c6b126bf599" (UID: "ecf71cdc-e20e-4eea-a978-1c6b126bf599"). InnerVolumeSpecName "kube-api-access-km48w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:04:50.454710 kubelet[2610]: I0307 01:04:50.454661 2610 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf71cdc-e20e-4eea-a978-1c6b126bf599-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ecf71cdc-e20e-4eea-a978-1c6b126bf599" (UID: "ecf71cdc-e20e-4eea-a978-1c6b126bf599"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:04:50.539813 kubelet[2610]: I0307 01:04:50.539643 2610 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ecf71cdc-e20e-4eea-a978-1c6b126bf599-nginx-config\") on node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" DevicePath \"\"" Mar 7 01:04:50.539813 kubelet[2610]: I0307 01:04:50.539718 2610 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-km48w\" (UniqueName: \"kubernetes.io/projected/ecf71cdc-e20e-4eea-a978-1c6b126bf599-kube-api-access-km48w\") on node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" DevicePath \"\"" Mar 7 01:04:50.539813 kubelet[2610]: I0307 01:04:50.539737 2610 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecf71cdc-e20e-4eea-a978-1c6b126bf599-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" DevicePath \"\"" Mar 7 01:04:50.539813 kubelet[2610]: I0307 01:04:50.539754 2610 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecf71cdc-e20e-4eea-a978-1c6b126bf599-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521\" DevicePath \"\"" Mar 7 01:04:50.708042 systemd[1]: run-netns-cni\x2d09458150\x2d23da\x2d2dc9\x2d1a06\x2d6d0cb032cfc0.mount: Deactivated successfully. Mar 7 01:04:50.710537 systemd[1]: run-netns-cni\x2d5f10fabd\x2d2f52\x2d48b9\x2dc5fe\x2d27c8d05bf313.mount: Deactivated successfully. Mar 7 01:04:50.710652 systemd[1]: run-netns-cni\x2d65758f96\x2dd292\x2dff5a\x2ddfe3\x2db2bc638d1603.mount: Deactivated successfully. Mar 7 01:04:50.710763 systemd[1]: run-netns-cni\x2de89e340a\x2db3c3\x2db57c\x2d2a6e\x2d8005c4a55cbb.mount: Deactivated successfully. 
Mar 7 01:04:50.710862 systemd[1]: run-netns-cni\x2d2926cf56\x2d5e4f\x2db512\x2df344\x2db531ef51eef2.mount: Deactivated successfully. Mar 7 01:04:50.710959 systemd[1]: var-lib-kubelet-pods-ecf71cdc\x2de20e\x2d4eea\x2da978\x2d1c6b126bf599-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkm48w.mount: Deactivated successfully. Mar 7 01:04:50.711067 systemd[1]: var-lib-kubelet-pods-ecf71cdc\x2de20e\x2d4eea\x2da978\x2d1c6b126bf599-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 01:04:50.721576 kubelet[2610]: I0307 01:04:50.721530 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:04:50.751459 systemd[1]: Removed slice kubepods-besteffort-podecf71cdc_e20e_4eea_a978_1c6b126bf599.slice - libcontainer container kubepods-besteffort-podecf71cdc_e20e_4eea_a978_1c6b126bf599.slice. Mar 7 01:04:51.022970 systemd[1]: Created slice kubepods-besteffort-pod5999fee7_9f2f_45af_8795_39c31e7a9b29.slice - libcontainer container kubepods-besteffort-pod5999fee7_9f2f_45af_8795_39c31e7a9b29.slice. 
Mar 7 01:04:51.044158 kubelet[2610]: I0307 01:04:51.044099 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5999fee7-9f2f-45af-8795-39c31e7a9b29-whisker-backend-key-pair\") pod \"whisker-875ff5fdb-prpjv\" (UID: \"5999fee7-9f2f-45af-8795-39c31e7a9b29\") " pod="calico-system/whisker-875ff5fdb-prpjv" Mar 7 01:04:51.044328 kubelet[2610]: I0307 01:04:51.044174 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5999fee7-9f2f-45af-8795-39c31e7a9b29-nginx-config\") pod \"whisker-875ff5fdb-prpjv\" (UID: \"5999fee7-9f2f-45af-8795-39c31e7a9b29\") " pod="calico-system/whisker-875ff5fdb-prpjv" Mar 7 01:04:51.044328 kubelet[2610]: I0307 01:04:51.044204 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5999fee7-9f2f-45af-8795-39c31e7a9b29-whisker-ca-bundle\") pod \"whisker-875ff5fdb-prpjv\" (UID: \"5999fee7-9f2f-45af-8795-39c31e7a9b29\") " pod="calico-system/whisker-875ff5fdb-prpjv" Mar 7 01:04:51.044328 kubelet[2610]: I0307 01:04:51.044247 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6h2x\" (UniqueName: \"kubernetes.io/projected/5999fee7-9f2f-45af-8795-39c31e7a9b29-kube-api-access-t6h2x\") pod \"whisker-875ff5fdb-prpjv\" (UID: \"5999fee7-9f2f-45af-8795-39c31e7a9b29\") " pod="calico-system/whisker-875ff5fdb-prpjv" Mar 7 01:04:51.108795 systemd-networkd[1361]: calif14ec3435a4: Link UP Mar 7 01:04:51.109196 systemd-networkd[1361]: calif14ec3435a4: Gained carrier Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.428 [ERROR][3945] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or 
directory filename="/var/lib/calico/mtu" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.460 [INFO][3945] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0 calico-apiserver-85c759574b- calico-system 286f9f87-acb9-4bde-81b8-c11f70245864 928 0 2026-03-07 01:04:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85c759574b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521 calico-apiserver-85c759574b-shlqv eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif14ec3435a4 [] [] }} ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Namespace="calico-system" Pod="calico-apiserver-85c759574b-shlqv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.461 [INFO][3945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Namespace="calico-system" Pod="calico-apiserver-85c759574b-shlqv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.717 [INFO][3978] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" HandleID="k8s-pod-network.38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" 
Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.783 [INFO][3978] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" HandleID="k8s-pod-network.38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8530), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", "pod":"calico-apiserver-85c759574b-shlqv", "timestamp":"2026-03-07 01:04:50.717946294 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112420)} Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.783 [INFO][3978] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.784 [INFO][3978] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.784 [INFO][3978] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521' Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.822 [INFO][3978] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.875 [INFO][3978] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.944 [INFO][3978] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.959 [INFO][3978] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.983 [INFO][3978] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.984 [INFO][3978] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:50.991 [INFO][3978] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:51.042 [INFO][3978] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 
handle="k8s-pod-network.38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.151385 containerd[1476]: 2026-03-07 01:04:51.079 [INFO][3978] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.193/26] block=192.168.26.192/26 handle="k8s-pod-network.38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.156880 containerd[1476]: 2026-03-07 01:04:51.079 [INFO][3978] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.193/26] handle="k8s-pod-network.38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.156880 containerd[1476]: 2026-03-07 01:04:51.080 [INFO][3978] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:51.156880 containerd[1476]: 2026-03-07 01:04:51.080 [INFO][3978] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.193/26] IPv6=[] ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" HandleID="k8s-pod-network.38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:51.156880 containerd[1476]: 2026-03-07 01:04:51.082 [INFO][3945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Namespace="calico-system" Pod="calico-apiserver-85c759574b-shlqv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0", GenerateName:"calico-apiserver-85c759574b-", Namespace:"calico-system", SelfLink:"", UID:"286f9f87-acb9-4bde-81b8-c11f70245864", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c759574b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"", Pod:"calico-apiserver-85c759574b-shlqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif14ec3435a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.156880 containerd[1476]: 2026-03-07 01:04:51.082 [INFO][3945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.193/32] ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Namespace="calico-system" Pod="calico-apiserver-85c759574b-shlqv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:51.156880 containerd[1476]: 2026-03-07 01:04:51.082 [INFO][3945] cni-plugin/dataplane_linux.go 69: Setting the 
host side veth name to calif14ec3435a4 ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Namespace="calico-system" Pod="calico-apiserver-85c759574b-shlqv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:51.156880 containerd[1476]: 2026-03-07 01:04:51.107 [INFO][3945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Namespace="calico-system" Pod="calico-apiserver-85c759574b-shlqv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:51.159419 containerd[1476]: 2026-03-07 01:04:51.107 [INFO][3945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Namespace="calico-system" Pod="calico-apiserver-85c759574b-shlqv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0", GenerateName:"calico-apiserver-85c759574b-", Namespace:"calico-system", SelfLink:"", UID:"286f9f87-acb9-4bde-81b8-c11f70245864", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c759574b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b", Pod:"calico-apiserver-85c759574b-shlqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif14ec3435a4", MAC:"0e:85:f9:fa:71:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.159419 containerd[1476]: 2026-03-07 01:04:51.136 [INFO][3945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b" Namespace="calico-system" Pod="calico-apiserver-85c759574b-shlqv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0" Mar 7 01:04:51.244428 containerd[1476]: time="2026-03-07T01:04:51.243162504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:04:51.244663 containerd[1476]: time="2026-03-07T01:04:51.244477073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:04:51.244663 containerd[1476]: time="2026-03-07T01:04:51.244547559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:51.244973 containerd[1476]: time="2026-03-07T01:04:51.244846658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:51.274079 systemd-networkd[1361]: calia0623d6e5be: Link UP Mar 7 01:04:51.276361 systemd-networkd[1361]: calia0623d6e5be: Gained carrier Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:50.426 [ERROR][3917] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:50.479 [INFO][3917] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0 goldmane-cccfbd5cf- calico-system 9d68eacf-53a9-41f5-a9a3-d1b563899713 927 0 2026-03-07 01:04:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521 goldmane-cccfbd5cf-zlg5q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia0623d6e5be [] [] }} ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zlg5q" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:50.479 [INFO][3917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zlg5q" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:50.740 [INFO][3984] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" HandleID="k8s-pod-network.dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:50.808 [INFO][3984] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" HandleID="k8s-pod-network.dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3260), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", "pod":"goldmane-cccfbd5cf-zlg5q", "timestamp":"2026-03-07 01:04:50.740637103 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000414000)} Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:50.809 [INFO][3984] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.079 [INFO][3984] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.080 [INFO][3984] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521' Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.105 [INFO][3984] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.131 [INFO][3984] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.161 [INFO][3984] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.173 [INFO][3984] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.190 [INFO][3984] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.190 [INFO][3984] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.207 [INFO][3984] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797 Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.227 [INFO][3984] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 
handle="k8s-pod-network.dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.250 [INFO][3984] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.194/26] block=192.168.26.192/26 handle="k8s-pod-network.dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.331587 containerd[1476]: 2026-03-07 01:04:51.250 [INFO][3984] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.194/26] handle="k8s-pod-network.dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.337040 containerd[1476]: 2026-03-07 01:04:51.250 [INFO][3984] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:51.337040 containerd[1476]: 2026-03-07 01:04:51.250 [INFO][3984] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.194/26] IPv6=[] ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" HandleID="k8s-pod-network.dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:51.337040 containerd[1476]: 2026-03-07 01:04:51.267 [INFO][3917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zlg5q" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0", 
GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"9d68eacf-53a9-41f5-a9a3-d1b563899713", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"", Pod:"goldmane-cccfbd5cf-zlg5q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia0623d6e5be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.337040 containerd[1476]: 2026-03-07 01:04:51.267 [INFO][3917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.194/32] ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zlg5q" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:51.337040 containerd[1476]: 2026-03-07 01:04:51.267 [INFO][3917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0623d6e5be ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zlg5q" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:51.337040 containerd[1476]: 2026-03-07 01:04:51.279 [INFO][3917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zlg5q" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:51.337040 containerd[1476]: 2026-03-07 01:04:51.280 [INFO][3917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zlg5q" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"9d68eacf-53a9-41f5-a9a3-d1b563899713", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", 
ContainerID:"dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797", Pod:"goldmane-cccfbd5cf-zlg5q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia0623d6e5be", MAC:"be:c4:fd:e4:6a:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.332604 systemd[1]: Started cri-containerd-38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b.scope - libcontainer container 38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b. Mar 7 01:04:51.343121 containerd[1476]: 2026-03-07 01:04:51.314 [INFO][3917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zlg5q" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:04:51.343121 containerd[1476]: time="2026-03-07T01:04:51.340793547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-875ff5fdb-prpjv,Uid:5999fee7-9f2f-45af-8795-39c31e7a9b29,Namespace:calico-system,Attempt:0,}" Mar 7 01:04:51.364326 kubelet[2610]: I0307 01:04:51.363375 2610 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf71cdc-e20e-4eea-a978-1c6b126bf599" path="/var/lib/kubelet/pods/ecf71cdc-e20e-4eea-a978-1c6b126bf599/volumes" Mar 7 01:04:51.427778 systemd-networkd[1361]: calib0cbf6d9b05: Link UP Mar 7 01:04:51.430029 systemd-networkd[1361]: calib0cbf6d9b05: Gained carrier Mar 7 01:04:51.463632 containerd[1476]: time="2026-03-07T01:04:51.454466637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:04:51.463632 containerd[1476]: time="2026-03-07T01:04:51.454539778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:04:51.463632 containerd[1476]: time="2026-03-07T01:04:51.454609353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:51.463632 containerd[1476]: time="2026-03-07T01:04:51.456501378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:50.519 [ERROR][3939] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:50.560 [INFO][3939] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0 coredns-66bc5c9577- kube-system 233af786-f839-4b49-bfb9-77d5d44842dc 930 0 2026-03-07 01:04:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521 coredns-66bc5c9577-r4h6f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib0cbf6d9b05 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Namespace="kube-system" Pod="coredns-66bc5c9577-r4h6f" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:50.560 [INFO][3939] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Namespace="kube-system" Pod="coredns-66bc5c9577-r4h6f" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:50.825 [INFO][4014] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" HandleID="k8s-pod-network.5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:50.872 [INFO][4014] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" HandleID="k8s-pod-network.5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003768a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", "pod":"coredns-66bc5c9577-r4h6f", "timestamp":"2026-03-07 01:04:50.825511313 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003db080)} Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:50.872 [INFO][4014] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.253 [INFO][4014] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.254 [INFO][4014] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521' Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.275 [INFO][4014] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.297 [INFO][4014] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.348 [INFO][4014] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.355 [INFO][4014] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.364 [INFO][4014] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.365 [INFO][4014] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.370 [INFO][4014] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76 Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.383 [INFO][4014] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.481503 containerd[1476]: 2026-03-07 01:04:51.411 [INFO][4014] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.195/26] block=192.168.26.192/26 handle="k8s-pod-network.5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.482797 containerd[1476]: 2026-03-07 01:04:51.411 [INFO][4014] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.195/26] handle="k8s-pod-network.5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.482797 containerd[1476]: 2026-03-07 01:04:51.411 [INFO][4014] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:04:51.482797 containerd[1476]: 2026-03-07 01:04:51.411 [INFO][4014] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.195/26] IPv6=[] ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" HandleID="k8s-pod-network.5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:51.482797 containerd[1476]: 2026-03-07 01:04:51.419 [INFO][3939] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Namespace="kube-system" Pod="coredns-66bc5c9577-r4h6f" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"233af786-f839-4b49-bfb9-77d5d44842dc", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"", Pod:"coredns-66bc5c9577-r4h6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0cbf6d9b05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.482797 containerd[1476]: 2026-03-07 01:04:51.419 [INFO][3939] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.195/32] ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Namespace="kube-system" Pod="coredns-66bc5c9577-r4h6f" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:51.482797 containerd[1476]: 2026-03-07 01:04:51.419 [INFO][3939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0cbf6d9b05 ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Namespace="kube-system" Pod="coredns-66bc5c9577-r4h6f" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:51.482797 containerd[1476]: 2026-03-07 01:04:51.441 [INFO][3939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Namespace="kube-system" Pod="coredns-66bc5c9577-r4h6f" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:51.483578 containerd[1476]: 2026-03-07 01:04:51.445 [INFO][3939] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Namespace="kube-system" Pod="coredns-66bc5c9577-r4h6f" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"233af786-f839-4b49-bfb9-77d5d44842dc", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76", Pod:"coredns-66bc5c9577-r4h6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calib0cbf6d9b05", MAC:"be:78:e9:fd:a8:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.483578 containerd[1476]: 2026-03-07 01:04:51.479 [INFO][3939] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76" Namespace="kube-system" Pod="coredns-66bc5c9577-r4h6f" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:04:51.538645 systemd[1]: Started cri-containerd-dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797.scope - libcontainer container dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797. 
Mar 7 01:04:51.577195 systemd-networkd[1361]: cali30dad147c7c: Link UP Mar 7 01:04:51.580127 systemd-networkd[1361]: cali30dad147c7c: Gained carrier Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:50.476 [ERROR][3928] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:50.552 [INFO][3928] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0 calico-apiserver-85c759574b- calico-system f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9 929 0 2026-03-07 01:04:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85c759574b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521 calico-apiserver-85c759574b-842dd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali30dad147c7c [] [] }} ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Namespace="calico-system" Pod="calico-apiserver-85c759574b-842dd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:50.552 [INFO][3928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Namespace="calico-system" Pod="calico-apiserver-85c759574b-842dd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:51.622993 containerd[1476]: 
2026-03-07 01:04:50.823 [INFO][4006] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" HandleID="k8s-pod-network.f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:50.895 [INFO][4006] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" HandleID="k8s-pod-network.f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025fae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", "pod":"calico-apiserver-85c759574b-842dd", "timestamp":"2026-03-07 01:04:50.823907701 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002baf20)} Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:50.895 [INFO][4006] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.412 [INFO][4006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.413 [INFO][4006] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521' Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.421 [INFO][4006] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.437 [INFO][4006] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.463 [INFO][4006] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.477 [INFO][4006] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.489 [INFO][4006] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.490 [INFO][4006] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.493 [INFO][4006] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.508 [INFO][4006] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 
handle="k8s-pod-network.f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.622993 containerd[1476]: 2026-03-07 01:04:51.531 [INFO][4006] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.196/26] block=192.168.26.192/26 handle="k8s-pod-network.f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.624397 containerd[1476]: 2026-03-07 01:04:51.531 [INFO][4006] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.196/26] handle="k8s-pod-network.f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.624397 containerd[1476]: 2026-03-07 01:04:51.531 [INFO][4006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:51.624397 containerd[1476]: 2026-03-07 01:04:51.531 [INFO][4006] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.196/26] IPv6=[] ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" HandleID="k8s-pod-network.f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:51.624397 containerd[1476]: 2026-03-07 01:04:51.545 [INFO][3928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Namespace="calico-system" Pod="calico-apiserver-85c759574b-842dd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0", GenerateName:"calico-apiserver-85c759574b-", Namespace:"calico-system", SelfLink:"", UID:"f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c759574b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"", Pod:"calico-apiserver-85c759574b-842dd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali30dad147c7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.624397 containerd[1476]: 2026-03-07 01:04:51.546 [INFO][3928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.196/32] ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Namespace="calico-system" Pod="calico-apiserver-85c759574b-842dd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:51.624397 containerd[1476]: 2026-03-07 01:04:51.547 [INFO][3928] cni-plugin/dataplane_linux.go 69: Setting the 
host side veth name to cali30dad147c7c ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Namespace="calico-system" Pod="calico-apiserver-85c759574b-842dd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:51.624397 containerd[1476]: 2026-03-07 01:04:51.579 [INFO][3928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Namespace="calico-system" Pod="calico-apiserver-85c759574b-842dd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:51.624840 containerd[1476]: 2026-03-07 01:04:51.588 [INFO][3928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Namespace="calico-system" Pod="calico-apiserver-85c759574b-842dd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0", GenerateName:"calico-apiserver-85c759574b-", Namespace:"calico-system", SelfLink:"", UID:"f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c759574b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d", Pod:"calico-apiserver-85c759574b-842dd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali30dad147c7c", MAC:"7e:e8:65:d7:d1:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.624840 containerd[1476]: 2026-03-07 01:04:51.609 [INFO][3928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d" Namespace="calico-system" Pod="calico-apiserver-85c759574b-842dd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:04:51.627367 containerd[1476]: time="2026-03-07T01:04:51.622631874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:04:51.627367 containerd[1476]: time="2026-03-07T01:04:51.622715491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:04:51.627367 containerd[1476]: time="2026-03-07T01:04:51.622745573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:51.627367 containerd[1476]: time="2026-03-07T01:04:51.622887576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:51.732061 systemd[1]: Started cri-containerd-5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76.scope - libcontainer container 5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76. Mar 7 01:04:51.734058 systemd-networkd[1361]: cali1f6ce361693: Link UP Mar 7 01:04:51.751594 systemd-networkd[1361]: cali1f6ce361693: Gained carrier Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:50.649 [ERROR][3960] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:50.705 [INFO][3960] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0 csi-node-driver- calico-system fad7ec34-4cf5-4a59-a390-83631ed6b6c6 931 0 2026-03-07 01:04:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521 csi-node-driver-hhvgj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1f6ce361693 [] [] }} ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Namespace="calico-system" Pod="csi-node-driver-hhvgj" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-" 
Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:50.711 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Namespace="calico-system" Pod="csi-node-driver-hhvgj" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:50.891 [INFO][4051] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" HandleID="k8s-pod-network.8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:50.934 [INFO][4051] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" HandleID="k8s-pod-network.8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00062e120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", "pod":"csi-node-driver-hhvgj", "timestamp":"2026-03-07 01:04:50.891356438 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000189600)} Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:50.934 [INFO][4051] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.534 [INFO][4051] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.535 [INFO][4051] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521' Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.546 [INFO][4051] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.582 [INFO][4051] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.603 [INFO][4051] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.613 [INFO][4051] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.631 [INFO][4051] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.631 [INFO][4051] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.637 [INFO][4051] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89 Mar 7 01:04:51.843495 containerd[1476]: 
2026-03-07 01:04:51.655 [INFO][4051] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.843495 containerd[1476]: 2026-03-07 01:04:51.674 [INFO][4051] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.197/26] block=192.168.26.192/26 handle="k8s-pod-network.8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.845665 containerd[1476]: 2026-03-07 01:04:51.675 [INFO][4051] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.197/26] handle="k8s-pod-network.8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.845665 containerd[1476]: 2026-03-07 01:04:51.676 [INFO][4051] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:04:51.845665 containerd[1476]: 2026-03-07 01:04:51.676 [INFO][4051] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.197/26] IPv6=[] ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" HandleID="k8s-pod-network.8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:51.845665 containerd[1476]: 2026-03-07 01:04:51.710 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Namespace="calico-system" Pod="csi-node-driver-hhvgj" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fad7ec34-4cf5-4a59-a390-83631ed6b6c6", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"", 
Pod:"csi-node-driver-hhvgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f6ce361693", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.845665 containerd[1476]: 2026-03-07 01:04:51.710 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.197/32] ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Namespace="calico-system" Pod="csi-node-driver-hhvgj" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:51.845665 containerd[1476]: 2026-03-07 01:04:51.711 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f6ce361693 ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Namespace="calico-system" Pod="csi-node-driver-hhvgj" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:51.845665 containerd[1476]: 2026-03-07 01:04:51.759 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Namespace="calico-system" Pod="csi-node-driver-hhvgj" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:51.847868 containerd[1476]: 2026-03-07 01:04:51.775 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" Namespace="calico-system" Pod="csi-node-driver-hhvgj" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fad7ec34-4cf5-4a59-a390-83631ed6b6c6", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89", Pod:"csi-node-driver-hhvgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f6ce361693", MAC:"1e:40:9e:f0:40:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.847868 containerd[1476]: 2026-03-07 01:04:51.835 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89" 
Namespace="calico-system" Pod="csi-node-driver-hhvgj" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:04:51.848882 systemd-networkd[1361]: cali6f6052bd86f: Link UP Mar 7 01:04:51.853313 systemd-networkd[1361]: cali6f6052bd86f: Gained carrier Mar 7 01:04:51.867920 containerd[1476]: time="2026-03-07T01:04:51.861751559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:04:51.867920 containerd[1476]: time="2026-03-07T01:04:51.861873710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:04:51.867920 containerd[1476]: time="2026-03-07T01:04:51.861902461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:51.867920 containerd[1476]: time="2026-03-07T01:04:51.862046033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:50.662 [ERROR][3985] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:50.758 [INFO][3985] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0 coredns-66bc5c9577- kube-system 680dd4ad-45eb-49a1-b4c7-db4a6b1269ec 933 0 2026-03-07 01:04:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521 coredns-66bc5c9577-v5hrm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f6052bd86f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Namespace="kube-system" Pod="coredns-66bc5c9577-v5hrm" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:50.758 [INFO][3985] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Namespace="kube-system" Pod="coredns-66bc5c9577-v5hrm" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:50.911 [INFO][4063] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" HandleID="k8s-pod-network.41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:50.950 [INFO][4063] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" HandleID="k8s-pod-network.41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005ee2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", "pod":"coredns-66bc5c9577-v5hrm", "timestamp":"2026-03-07 01:04:50.911896645 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004f3760)} Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:50.950 [INFO][4063] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.677 [INFO][4063] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.684 [INFO][4063] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521' Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.698 [INFO][4063] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.719 [INFO][4063] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.768 [INFO][4063] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.776 [INFO][4063] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.783 [INFO][4063] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.783 [INFO][4063] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.787 [INFO][4063] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.800 [INFO][4063] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 
handle="k8s-pod-network.41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.919223 containerd[1476]: 2026-03-07 01:04:51.817 [INFO][4063] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.198/26] block=192.168.26.192/26 handle="k8s-pod-network.41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.920325 containerd[1476]: 2026-03-07 01:04:51.817 [INFO][4063] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.198/26] handle="k8s-pod-network.41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:51.920325 containerd[1476]: 2026-03-07 01:04:51.817 [INFO][4063] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:04:51.920325 containerd[1476]: 2026-03-07 01:04:51.817 [INFO][4063] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.198/26] IPv6=[] ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" HandleID="k8s-pod-network.41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:51.920325 containerd[1476]: 2026-03-07 01:04:51.827 [INFO][3985] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Namespace="kube-system" Pod="coredns-66bc5c9577-v5hrm" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0", GenerateName:"coredns-66bc5c9577-", 
Namespace:"kube-system", SelfLink:"", UID:"680dd4ad-45eb-49a1-b4c7-db4a6b1269ec", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"", Pod:"coredns-66bc5c9577-v5hrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f6052bd86f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.920325 containerd[1476]: 2026-03-07 01:04:51.830 [INFO][3985] cni-plugin/k8s.go 419: Calico CNI using 
IPs: [192.168.26.198/32] ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Namespace="kube-system" Pod="coredns-66bc5c9577-v5hrm" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:51.920325 containerd[1476]: 2026-03-07 01:04:51.830 [INFO][3985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f6052bd86f ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Namespace="kube-system" Pod="coredns-66bc5c9577-v5hrm" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:51.920325 containerd[1476]: 2026-03-07 01:04:51.867 [INFO][3985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Namespace="kube-system" Pod="coredns-66bc5c9577-v5hrm" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:51.920760 containerd[1476]: 2026-03-07 01:04:51.868 [INFO][3985] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Namespace="kube-system" Pod="coredns-66bc5c9577-v5hrm" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"680dd4ad-45eb-49a1-b4c7-db4a6b1269ec", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 14, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a", Pod:"coredns-66bc5c9577-v5hrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f6052bd86f", MAC:"2a:ab:63:4d:95:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:04:51.920760 containerd[1476]: 2026-03-07 01:04:51.909 [INFO][3985] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a" Namespace="kube-system" 
Pod="coredns-66bc5c9577-v5hrm" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:04:51.968624 systemd[1]: Started cri-containerd-f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d.scope - libcontainer container f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d. Mar 7 01:04:52.015908 systemd-networkd[1361]: calic2f106b5ad6: Link UP Mar 7 01:04:52.031674 systemd-networkd[1361]: calic2f106b5ad6: Gained carrier Mar 7 01:04:52.036733 containerd[1476]: time="2026-03-07T01:04:52.036685031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c759574b-shlqv,Uid:286f9f87-acb9-4bde-81b8-c11f70245864,Namespace:calico-system,Attempt:1,} returns sandbox id \"38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b\"" Mar 7 01:04:52.040355 containerd[1476]: time="2026-03-07T01:04:52.040169406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:04:52.057080 containerd[1476]: time="2026-03-07T01:04:52.056700259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:04:52.057080 containerd[1476]: time="2026-03-07T01:04:52.056788740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:04:52.057080 containerd[1476]: time="2026-03-07T01:04:52.056817401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:52.057080 containerd[1476]: time="2026-03-07T01:04:52.056937938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:04:52.062175 containerd[1476]: time="2026-03-07T01:04:52.061659754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r4h6f,Uid:233af786-f839-4b49-bfb9-77d5d44842dc,Namespace:kube-system,Attempt:1,} returns sandbox id \"5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76\"" Mar 7 01:04:52.085113 containerd[1476]: time="2026-03-07T01:04:52.084876150Z" level=info msg="CreateContainer within sandbox \"5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.467 [ERROR][4141] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.514 [INFO][4141] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0 whisker-875ff5fdb- calico-system 5999fee7-9f2f-45af-8795-39c31e7a9b29 949 0 2026-03-07 01:04:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:875ff5fdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521 whisker-875ff5fdb-prpjv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic2f106b5ad6 [] [] }} ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Namespace="calico-system" Pod="whisker-875ff5fdb-prpjv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.514 [INFO][4141] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Namespace="calico-system" Pod="whisker-875ff5fdb-prpjv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.781 [INFO][4199] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" HandleID="k8s-pod-network.96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.819 [INFO][4199] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" HandleID="k8s-pod-network.96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f130), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", "pod":"whisker-875ff5fdb-prpjv", "timestamp":"2026-03-07 01:04:51.781556795 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000478580)} Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.819 [INFO][4199] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.819 [INFO][4199] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.819 [INFO][4199] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521' Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.832 [INFO][4199] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.850 [INFO][4199] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.873 [INFO][4199] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.884 [INFO][4199] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.895 [INFO][4199] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.896 [INFO][4199] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.921 [INFO][4199] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4 Mar 7 01:04:52.099880 containerd[1476]: 
2026-03-07 01:04:51.936 [INFO][4199] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.967 [INFO][4199] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.199/26] block=192.168.26.192/26 handle="k8s-pod-network.96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:52.099880 containerd[1476]: 2026-03-07 01:04:51.967 [INFO][4199] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.199/26] handle="k8s-pod-network.96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:04:52.101140 containerd[1476]: 2026-03-07 01:04:51.967 [INFO][4199] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:04:52.101140 containerd[1476]: 2026-03-07 01:04:51.967 [INFO][4199] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.199/26] IPv6=[] ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" HandleID="k8s-pod-network.96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0"
Mar 7 01:04:52.101140 containerd[1476]: 2026-03-07 01:04:52.001 [INFO][4141] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Namespace="calico-system" Pod="whisker-875ff5fdb-prpjv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0", GenerateName:"whisker-875ff5fdb-", Namespace:"calico-system", SelfLink:"", UID:"5999fee7-9f2f-45af-8795-39c31e7a9b29", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"875ff5fdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"", Pod:"whisker-875ff5fdb-prpjv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic2f106b5ad6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:04:52.101140 containerd[1476]: 2026-03-07 01:04:52.003 [INFO][4141] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.199/32] ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Namespace="calico-system" Pod="whisker-875ff5fdb-prpjv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0"
Mar 7 01:04:52.101140 containerd[1476]: 2026-03-07 01:04:52.003 [INFO][4141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2f106b5ad6 ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Namespace="calico-system" Pod="whisker-875ff5fdb-prpjv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0"
Mar 7 01:04:52.101140 containerd[1476]: 2026-03-07 01:04:52.033 [INFO][4141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Namespace="calico-system" Pod="whisker-875ff5fdb-prpjv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0"
Mar 7 01:04:52.101140 containerd[1476]: 2026-03-07 01:04:52.038 [INFO][4141] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Namespace="calico-system" Pod="whisker-875ff5fdb-prpjv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0", GenerateName:"whisker-875ff5fdb-", Namespace:"calico-system", SelfLink:"", UID:"5999fee7-9f2f-45af-8795-39c31e7a9b29", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"875ff5fdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4", Pod:"whisker-875ff5fdb-prpjv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic2f106b5ad6", MAC:"16:43:86:20:1b:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:04:52.102556 containerd[1476]: 2026-03-07 01:04:52.075 [INFO][4141] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4" Namespace="calico-system" Pod="whisker-875ff5fdb-prpjv" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--875ff5fdb--prpjv-eth0"
Mar 7 01:04:52.121589 containerd[1476]: time="2026-03-07T01:04:52.121485156Z" level=info msg="CreateContainer within sandbox \"5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb4efac9ddff3e0b01c5367fb4f7eb0954a4130d0808fe0e712d280ab66eed7f\""
Mar 7 01:04:52.125714 containerd[1476]: time="2026-03-07T01:04:52.125218935Z" level=info msg="StartContainer for \"bb4efac9ddff3e0b01c5367fb4f7eb0954a4130d0808fe0e712d280ab66eed7f\""
Mar 7 01:04:52.220591 systemd[1]: Started cri-containerd-8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89.scope - libcontainer container 8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89.
Mar 7 01:04:52.225259 containerd[1476]: time="2026-03-07T01:04:52.224937742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:04:52.225259 containerd[1476]: time="2026-03-07T01:04:52.225070211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:04:52.225259 containerd[1476]: time="2026-03-07T01:04:52.225100898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:52.226603 containerd[1476]: time="2026-03-07T01:04:52.225225378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:52.246447 containerd[1476]: time="2026-03-07T01:04:52.246394652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-zlg5q,Uid:9d68eacf-53a9-41f5-a9a3-d1b563899713,Namespace:calico-system,Attempt:1,} returns sandbox id \"dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797\""
Mar 7 01:04:52.310556 systemd[1]: Started cri-containerd-41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a.scope - libcontainer container 41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a.
Mar 7 01:04:52.335376 containerd[1476]: time="2026-03-07T01:04:52.324591225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:04:52.335376 containerd[1476]: time="2026-03-07T01:04:52.325033094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:04:52.335376 containerd[1476]: time="2026-03-07T01:04:52.325062991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:52.335376 containerd[1476]: time="2026-03-07T01:04:52.325214538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:52.386573 systemd[1]: Started cri-containerd-bb4efac9ddff3e0b01c5367fb4f7eb0954a4130d0808fe0e712d280ab66eed7f.scope - libcontainer container bb4efac9ddff3e0b01c5367fb4f7eb0954a4130d0808fe0e712d280ab66eed7f.
Mar 7 01:04:52.472620 systemd[1]: Started cri-containerd-96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4.scope - libcontainer container 96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4.
Mar 7 01:04:52.477408 containerd[1476]: time="2026-03-07T01:04:52.475296754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c759574b-842dd,Uid:f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9,Namespace:calico-system,Attempt:1,} returns sandbox id \"f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d\""
Mar 7 01:04:52.492455 containerd[1476]: time="2026-03-07T01:04:52.491613182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhvgj,Uid:fad7ec34-4cf5-4a59-a390-83631ed6b6c6,Namespace:calico-system,Attempt:1,} returns sandbox id \"8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89\""
Mar 7 01:04:52.514368 containerd[1476]: time="2026-03-07T01:04:52.513659935Z" level=info msg="StartContainer for \"bb4efac9ddff3e0b01c5367fb4f7eb0954a4130d0808fe0e712d280ab66eed7f\" returns successfully"
Mar 7 01:04:52.528615 containerd[1476]: time="2026-03-07T01:04:52.527706847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v5hrm,Uid:680dd4ad-45eb-49a1-b4c7-db4a6b1269ec,Namespace:kube-system,Attempt:1,} returns sandbox id \"41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a\""
Mar 7 01:04:52.544038 containerd[1476]: time="2026-03-07T01:04:52.543986307Z" level=info msg="CreateContainer within sandbox \"41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:04:52.565750 containerd[1476]: time="2026-03-07T01:04:52.565609614Z" level=info msg="CreateContainer within sandbox \"41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9465a918bcf31f6b8985afdc95026fa0aca43d7525f7945083d903c85000221\""
Mar 7 01:04:52.567814 containerd[1476]: time="2026-03-07T01:04:52.567580481Z" level=info msg="StartContainer for \"f9465a918bcf31f6b8985afdc95026fa0aca43d7525f7945083d903c85000221\""
Mar 7 01:04:52.638665 systemd[1]: Started cri-containerd-f9465a918bcf31f6b8985afdc95026fa0aca43d7525f7945083d903c85000221.scope - libcontainer container f9465a918bcf31f6b8985afdc95026fa0aca43d7525f7945083d903c85000221.
Mar 7 01:04:52.697829 containerd[1476]: time="2026-03-07T01:04:52.697681346Z" level=info msg="StartContainer for \"f9465a918bcf31f6b8985afdc95026fa0aca43d7525f7945083d903c85000221\" returns successfully"
Mar 7 01:04:52.818834 kubelet[2610]: I0307 01:04:52.818043 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v5hrm" podStartSLOduration=38.81801791 podStartE2EDuration="38.81801791s" podCreationTimestamp="2026-03-07 01:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:04:52.789085403 +0000 UTC m=+45.654650380" watchObservedRunningTime="2026-03-07 01:04:52.81801791 +0000 UTC m=+45.683582881"
Mar 7 01:04:52.821054 kubelet[2610]: I0307 01:04:52.820469 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r4h6f" podStartSLOduration=38.820423994 podStartE2EDuration="38.820423994s" podCreationTimestamp="2026-03-07 01:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:04:52.817553041 +0000 UTC m=+45.683118014" watchObservedRunningTime="2026-03-07 01:04:52.820423994 +0000 UTC m=+45.685988971"
Mar 7 01:04:52.910554 containerd[1476]: time="2026-03-07T01:04:52.909441738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-875ff5fdb-prpjv,Uid:5999fee7-9f2f-45af-8795-39c31e7a9b29,Namespace:calico-system,Attempt:0,} returns sandbox id \"96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4\""
Mar 7 01:04:52.954716 systemd-networkd[1361]: cali6f6052bd86f: Gained IPv6LL
Mar 7 01:04:52.955503 systemd-networkd[1361]: calif14ec3435a4: Gained IPv6LL
Mar 7 01:04:53.020429 systemd-networkd[1361]: cali30dad147c7c: Gained IPv6LL
Mar 7 01:04:53.082514 systemd-networkd[1361]: calib0cbf6d9b05: Gained IPv6LL
Mar 7 01:04:53.212700 systemd-networkd[1361]: calia0623d6e5be: Gained IPv6LL
Mar 7 01:04:53.258381 kernel: calico-node[4379]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Mar 7 01:04:53.787514 systemd-networkd[1361]: cali1f6ce361693: Gained IPv6LL
Mar 7 01:04:53.914512 systemd-networkd[1361]: calic2f106b5ad6: Gained IPv6LL
Mar 7 01:04:54.297727 kubelet[2610]: I0307 01:04:54.296721 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:04:54.312518 systemd-networkd[1361]: vxlan.calico: Link UP
Mar 7 01:04:54.312530 systemd-networkd[1361]: vxlan.calico: Gained carrier
Mar 7 01:04:54.369473 systemd[1]: run-containerd-runc-k8s.io-7a0e657561bf79a421389b341ee1682afcf4a8b3a71f56e7e646139ce9a9144c-runc.Vmf04c.mount: Deactivated successfully.
Mar 7 01:04:55.717731 containerd[1476]: time="2026-03-07T01:04:55.717660877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:55.719277 containerd[1476]: time="2026-03-07T01:04:55.719199733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 7 01:04:55.721045 containerd[1476]: time="2026-03-07T01:04:55.720956735Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:55.729896 containerd[1476]: time="2026-03-07T01:04:55.729184785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:55.733271 containerd[1476]: time="2026-03-07T01:04:55.733210199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.692995529s"
Mar 7 01:04:55.733271 containerd[1476]: time="2026-03-07T01:04:55.733257815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 7 01:04:55.736535 containerd[1476]: time="2026-03-07T01:04:55.735578532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 7 01:04:55.739008 containerd[1476]: time="2026-03-07T01:04:55.738958257Z" level=info msg="CreateContainer within sandbox \"38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 7 01:04:55.759803 containerd[1476]: time="2026-03-07T01:04:55.759753267Z" level=info msg="CreateContainer within sandbox \"38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"46c7df1fb2a2cd708e92cea02871039feb1148a8efe8b11895ad2ecc016d47d6\""
Mar 7 01:04:55.762412 containerd[1476]: time="2026-03-07T01:04:55.761153935Z" level=info msg="StartContainer for \"46c7df1fb2a2cd708e92cea02871039feb1148a8efe8b11895ad2ecc016d47d6\""
Mar 7 01:04:55.822609 systemd[1]: Started cri-containerd-46c7df1fb2a2cd708e92cea02871039feb1148a8efe8b11895ad2ecc016d47d6.scope - libcontainer container 46c7df1fb2a2cd708e92cea02871039feb1148a8efe8b11895ad2ecc016d47d6.
Mar 7 01:04:55.834522 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL
Mar 7 01:04:55.894369 containerd[1476]: time="2026-03-07T01:04:55.894301028Z" level=info msg="StartContainer for \"46c7df1fb2a2cd708e92cea02871039feb1148a8efe8b11895ad2ecc016d47d6\" returns successfully"
Mar 7 01:04:57.828361 kubelet[2610]: I0307 01:04:57.826590 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:04:57.964272 ntpd[1436]: Listen normally on 8 vxlan.calico 192.168.26.192:123
Mar 7 01:04:57.965646 ntpd[1436]: 7 Mar 01:04:57 ntpd[1436]: Listen normally on 8 vxlan.calico 192.168.26.192:123
Mar 7 01:04:57.965646 ntpd[1436]: 7 Mar 01:04:57 ntpd[1436]: Listen normally on 9 calif14ec3435a4 [fe80::ecee:eeff:feee:eeee%4]:123
Mar 7 01:04:57.965646 ntpd[1436]: 7 Mar 01:04:57 ntpd[1436]: Listen normally on 10 calia0623d6e5be [fe80::ecee:eeff:feee:eeee%5]:123
Mar 7 01:04:57.965646 ntpd[1436]: 7 Mar 01:04:57 ntpd[1436]: Listen normally on 11 calib0cbf6d9b05 [fe80::ecee:eeff:feee:eeee%6]:123
Mar 7 01:04:57.965646 ntpd[1436]: 7 Mar 01:04:57 ntpd[1436]: Listen normally on 12 cali30dad147c7c [fe80::ecee:eeff:feee:eeee%7]:123
Mar 7 01:04:57.965646 ntpd[1436]: 7 Mar 01:04:57 ntpd[1436]: Listen normally on 13 cali1f6ce361693 [fe80::ecee:eeff:feee:eeee%8]:123
Mar 7 01:04:57.965646 ntpd[1436]: 7 Mar 01:04:57 ntpd[1436]: Listen normally on 14 cali6f6052bd86f [fe80::ecee:eeff:feee:eeee%9]:123
Mar 7 01:04:57.965646 ntpd[1436]: 7 Mar 01:04:57 ntpd[1436]: Listen normally on 15 calic2f106b5ad6 [fe80::ecee:eeff:feee:eeee%10]:123
Mar 7 01:04:57.965646 ntpd[1436]: 7 Mar 01:04:57 ntpd[1436]: Listen normally on 16 vxlan.calico [fe80::64d6:8aff:fe95:4bdd%11]:123
Mar 7 01:04:57.964461 ntpd[1436]: Listen normally on 9 calif14ec3435a4 [fe80::ecee:eeff:feee:eeee%4]:123
Mar 7 01:04:57.964547 ntpd[1436]: Listen normally on 10 calia0623d6e5be [fe80::ecee:eeff:feee:eeee%5]:123
Mar 7 01:04:57.964605 ntpd[1436]: Listen normally on 11 calib0cbf6d9b05 [fe80::ecee:eeff:feee:eeee%6]:123
Mar 7 01:04:57.964670 ntpd[1436]: Listen normally on 12 cali30dad147c7c [fe80::ecee:eeff:feee:eeee%7]:123
Mar 7 01:04:57.964729 ntpd[1436]: Listen normally on 13 cali1f6ce361693 [fe80::ecee:eeff:feee:eeee%8]:123
Mar 7 01:04:57.964785 ntpd[1436]: Listen normally on 14 cali6f6052bd86f [fe80::ecee:eeff:feee:eeee%9]:123
Mar 7 01:04:57.964849 ntpd[1436]: Listen normally on 15 calic2f106b5ad6 [fe80::ecee:eeff:feee:eeee%10]:123
Mar 7 01:04:57.964906 ntpd[1436]: Listen normally on 16 vxlan.calico [fe80::64d6:8aff:fe95:4bdd%11]:123
Mar 7 01:04:58.263130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040326691.mount: Deactivated successfully.
Mar 7 01:04:58.491213 kubelet[2610]: I0307 01:04:58.491075 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-85c759574b-shlqv" podStartSLOduration=27.796220549 podStartE2EDuration="31.491048163s" podCreationTimestamp="2026-03-07 01:04:27 +0000 UTC" firstStartedPulling="2026-03-07 01:04:52.039481296 +0000 UTC m=+44.905046259" lastFinishedPulling="2026-03-07 01:04:55.734308905 +0000 UTC m=+48.599873873" observedRunningTime="2026-03-07 01:04:56.846322623 +0000 UTC m=+49.711887598" watchObservedRunningTime="2026-03-07 01:04:58.491048163 +0000 UTC m=+51.356613150"
Mar 7 01:04:59.089083 containerd[1476]: time="2026-03-07T01:04:59.089013409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:59.090477 containerd[1476]: time="2026-03-07T01:04:59.090277282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 7 01:04:59.091784 containerd[1476]: time="2026-03-07T01:04:59.091716792Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:59.096362 containerd[1476]: time="2026-03-07T01:04:59.095032192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:59.096362 containerd[1476]: time="2026-03-07T01:04:59.096215042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.36058665s"
Mar 7 01:04:59.096362 containerd[1476]: time="2026-03-07T01:04:59.096258217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 7 01:04:59.099323 containerd[1476]: time="2026-03-07T01:04:59.099228169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 7 01:04:59.103253 containerd[1476]: time="2026-03-07T01:04:59.103198414Z" level=info msg="CreateContainer within sandbox \"dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 7 01:04:59.121974 containerd[1476]: time="2026-03-07T01:04:59.121917994Z" level=info msg="CreateContainer within sandbox \"dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2f7951d37f9b315c87bc367f9215badec009ccf9b31ff6e68f1373229670c671\""
Mar 7 01:04:59.122951 containerd[1476]: time="2026-03-07T01:04:59.122917516Z" level=info msg="StartContainer for \"2f7951d37f9b315c87bc367f9215badec009ccf9b31ff6e68f1373229670c671\""
Mar 7 01:04:59.181572 systemd[1]: Started cri-containerd-2f7951d37f9b315c87bc367f9215badec009ccf9b31ff6e68f1373229670c671.scope - libcontainer container 2f7951d37f9b315c87bc367f9215badec009ccf9b31ff6e68f1373229670c671.
Mar 7 01:04:59.242598 containerd[1476]: time="2026-03-07T01:04:59.242284304Z" level=info msg="StartContainer for \"2f7951d37f9b315c87bc367f9215badec009ccf9b31ff6e68f1373229670c671\" returns successfully"
Mar 7 01:04:59.298916 containerd[1476]: time="2026-03-07T01:04:59.298853994Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:59.299965 containerd[1476]: time="2026-03-07T01:04:59.299897875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 7 01:04:59.303087 containerd[1476]: time="2026-03-07T01:04:59.303037696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 203.492942ms"
Mar 7 01:04:59.303087 containerd[1476]: time="2026-03-07T01:04:59.303084176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 7 01:04:59.305389 containerd[1476]: time="2026-03-07T01:04:59.305140948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 7 01:04:59.308424 containerd[1476]: time="2026-03-07T01:04:59.308394512Z" level=info msg="CreateContainer within sandbox \"f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 7 01:04:59.338168 containerd[1476]: time="2026-03-07T01:04:59.338107266Z" level=info msg="CreateContainer within sandbox \"f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"98ac9fe1485508207817295365ce47db9583f7576e2d22866fd957cfa6c1a114\""
Mar 7 01:04:59.340786 containerd[1476]: time="2026-03-07T01:04:59.339850064Z" level=info msg="StartContainer for \"98ac9fe1485508207817295365ce47db9583f7576e2d22866fd957cfa6c1a114\""
Mar 7 01:04:59.406850 systemd[1]: Started cri-containerd-98ac9fe1485508207817295365ce47db9583f7576e2d22866fd957cfa6c1a114.scope - libcontainer container 98ac9fe1485508207817295365ce47db9583f7576e2d22866fd957cfa6c1a114.
Mar 7 01:04:59.466776 containerd[1476]: time="2026-03-07T01:04:59.466722423Z" level=info msg="StartContainer for \"98ac9fe1485508207817295365ce47db9583f7576e2d22866fd957cfa6c1a114\" returns successfully"
Mar 7 01:04:59.876712 kubelet[2610]: I0307 01:04:59.875878 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-zlg5q" podStartSLOduration=26.027642858 podStartE2EDuration="32.875852717s" podCreationTimestamp="2026-03-07 01:04:27 +0000 UTC" firstStartedPulling="2026-03-07 01:04:52.250766866 +0000 UTC m=+45.116331829" lastFinishedPulling="2026-03-07 01:04:59.098976724 +0000 UTC m=+51.964541688" observedRunningTime="2026-03-07 01:04:59.865802736 +0000 UTC m=+52.731367711" watchObservedRunningTime="2026-03-07 01:04:59.875852717 +0000 UTC m=+52.741417693"
Mar 7 01:04:59.913737 kubelet[2610]: I0307 01:04:59.912916 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-85c759574b-842dd" podStartSLOduration=26.099723479 podStartE2EDuration="32.91288992s" podCreationTimestamp="2026-03-07 01:04:27 +0000 UTC" firstStartedPulling="2026-03-07 01:04:52.490942584 +0000 UTC m=+45.356507543" lastFinishedPulling="2026-03-07 01:04:59.304109034 +0000 UTC m=+52.169673984" observedRunningTime="2026-03-07 01:04:59.912151323 +0000 UTC m=+52.777716298" watchObservedRunningTime="2026-03-07 01:04:59.91288992 +0000 UTC m=+52.778454894"
Mar 7 01:05:00.639913 containerd[1476]: time="2026-03-07T01:05:00.639850469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:00.641915 containerd[1476]: time="2026-03-07T01:05:00.641851135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Mar 7 01:05:00.644412 containerd[1476]: time="2026-03-07T01:05:00.643550090Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:00.648497 containerd[1476]: time="2026-03-07T01:05:00.648444745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:00.649428 containerd[1476]: time="2026-03-07T01:05:00.649384385Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.344204032s"
Mar 7 01:05:00.649555 containerd[1476]: time="2026-03-07T01:05:00.649441176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Mar 7 01:05:00.652697 containerd[1476]: time="2026-03-07T01:05:00.652650076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Mar 7 01:05:00.657384 containerd[1476]: time="2026-03-07T01:05:00.657237355Z" level=info msg="CreateContainer within sandbox \"8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 7 01:05:00.681706 containerd[1476]: time="2026-03-07T01:05:00.681540420Z" level=info msg="CreateContainer within sandbox \"8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ce832bd7008d638355d2430be11bf25d9afeb9f6701827bd79ce4f2754021d79\""
Mar 7 01:05:00.685445 containerd[1476]: time="2026-03-07T01:05:00.683470039Z" level=info msg="StartContainer for \"ce832bd7008d638355d2430be11bf25d9afeb9f6701827bd79ce4f2754021d79\""
Mar 7 01:05:00.755565 systemd[1]: Started cri-containerd-ce832bd7008d638355d2430be11bf25d9afeb9f6701827bd79ce4f2754021d79.scope - libcontainer container ce832bd7008d638355d2430be11bf25d9afeb9f6701827bd79ce4f2754021d79.
Mar 7 01:05:00.808739 containerd[1476]: time="2026-03-07T01:05:00.808672484Z" level=info msg="StartContainer for \"ce832bd7008d638355d2430be11bf25d9afeb9f6701827bd79ce4f2754021d79\" returns successfully"
Mar 7 01:05:00.850367 kubelet[2610]: I0307 01:05:00.850310 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:05:01.327957 systemd[1]: run-containerd-runc-k8s.io-2f7951d37f9b315c87bc367f9215badec009ccf9b31ff6e68f1373229670c671-runc.7vFW6F.mount: Deactivated successfully.
Mar 7 01:05:01.862189 containerd[1476]: time="2026-03-07T01:05:01.862128188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:01.863585 containerd[1476]: time="2026-03-07T01:05:01.863513063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Mar 7 01:05:01.865092 containerd[1476]: time="2026-03-07T01:05:01.865024872Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:01.868287 containerd[1476]: time="2026-03-07T01:05:01.868251710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:01.869424 containerd[1476]: time="2026-03-07T01:05:01.869234316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.216534736s"
Mar 7 01:05:01.869424 containerd[1476]: time="2026-03-07T01:05:01.869279655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Mar 7 01:05:01.871433 containerd[1476]: time="2026-03-07T01:05:01.871231831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 7 01:05:01.875476 containerd[1476]: time="2026-03-07T01:05:01.875437029Z" level=info msg="CreateContainer within sandbox \"96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 7 01:05:01.896521 containerd[1476]: time="2026-03-07T01:05:01.896462035Z" level=info msg="CreateContainer within sandbox \"96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bf7e49f7286cba717b3a6f957f71e7248cab8ca931d397dd0c63c890a124fc90\""
Mar 7 01:05:01.897402 containerd[1476]: time="2026-03-07T01:05:01.897364413Z" level=info msg="StartContainer for \"bf7e49f7286cba717b3a6f957f71e7248cab8ca931d397dd0c63c890a124fc90\""
Mar 7 01:05:01.989721 systemd[1]: Started cri-containerd-bf7e49f7286cba717b3a6f957f71e7248cab8ca931d397dd0c63c890a124fc90.scope - libcontainer container bf7e49f7286cba717b3a6f957f71e7248cab8ca931d397dd0c63c890a124fc90.
Mar 7 01:05:02.088872 containerd[1476]: time="2026-03-07T01:05:02.088817704Z" level=info msg="StartContainer for \"bf7e49f7286cba717b3a6f957f71e7248cab8ca931d397dd0c63c890a124fc90\" returns successfully"
Mar 7 01:05:03.322843 containerd[1476]: time="2026-03-07T01:05:03.322771771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:03.324356 containerd[1476]: time="2026-03-07T01:05:03.324265876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 7 01:05:03.325822 containerd[1476]: time="2026-03-07T01:05:03.325748953Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:03.328934 containerd[1476]: time="2026-03-07T01:05:03.328880811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:03.330432 containerd[1476]: time="2026-03-07T01:05:03.329992161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.458717301s"
Mar 7 01:05:03.330432 containerd[1476]: time="2026-03-07T01:05:03.330044375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 7 01:05:03.333862 containerd[1476]: time="2026-03-07T01:05:03.332770970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 7 01:05:03.336186 containerd[1476]: time="2026-03-07T01:05:03.336132538Z" level=info msg="CreateContainer within sandbox \"8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 7 01:05:03.359664 containerd[1476]: time="2026-03-07T01:05:03.359546721Z" level=info msg="CreateContainer within sandbox \"8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d6582cd3200dda112abf90db4048e2f674277dbde361fd9e6448b3871cd7f68a\""
Mar 7 01:05:03.363440 containerd[1476]: time="2026-03-07T01:05:03.361548403Z" level=info msg="StartContainer for \"d6582cd3200dda112abf90db4048e2f674277dbde361fd9e6448b3871cd7f68a\""
Mar 7 01:05:03.425651 systemd[1]: Started cri-containerd-d6582cd3200dda112abf90db4048e2f674277dbde361fd9e6448b3871cd7f68a.scope - libcontainer container d6582cd3200dda112abf90db4048e2f674277dbde361fd9e6448b3871cd7f68a.
Mar 7 01:05:03.467371 containerd[1476]: time="2026-03-07T01:05:03.466428078Z" level=info msg="StartContainer for \"d6582cd3200dda112abf90db4048e2f674277dbde361fd9e6448b3871cd7f68a\" returns successfully" Mar 7 01:05:03.883623 kubelet[2610]: I0307 01:05:03.883024 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hhvgj" podStartSLOduration=25.063143794 podStartE2EDuration="35.882998719s" podCreationTimestamp="2026-03-07 01:04:28 +0000 UTC" firstStartedPulling="2026-03-07 01:04:52.511998184 +0000 UTC m=+45.377563132" lastFinishedPulling="2026-03-07 01:05:03.331853093 +0000 UTC m=+56.197418057" observedRunningTime="2026-03-07 01:05:03.882182668 +0000 UTC m=+56.747747641" watchObservedRunningTime="2026-03-07 01:05:03.882998719 +0000 UTC m=+56.748563693" Mar 7 01:05:04.356410 containerd[1476]: time="2026-03-07T01:05:04.355450702Z" level=info msg="StopPodSandbox for \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\"" Mar 7 01:05:04.463646 kubelet[2610]: I0307 01:05:04.462929 2610 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:05:04.463646 kubelet[2610]: I0307 01:05:04.463011 2610 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.474 [INFO][5065] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.477 [INFO][5065] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" iface="eth0" netns="/var/run/netns/cni-1105d837-a195-b5fc-d53b-bf9d5e5d9484" Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.478 [INFO][5065] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" iface="eth0" netns="/var/run/netns/cni-1105d837-a195-b5fc-d53b-bf9d5e5d9484" Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.484 [INFO][5065] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" iface="eth0" netns="/var/run/netns/cni-1105d837-a195-b5fc-d53b-bf9d5e5d9484" Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.484 [INFO][5065] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.484 [INFO][5065] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.565 [INFO][5072] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" HandleID="k8s-pod-network.7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.565 [INFO][5072] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.566 [INFO][5072] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.581 [WARNING][5072] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" HandleID="k8s-pod-network.7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.581 [INFO][5072] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" HandleID="k8s-pod-network.7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.585 [INFO][5072] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:04.591230 containerd[1476]: 2026-03-07 01:05:04.588 [INFO][5065] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Mar 7 01:05:04.592049 containerd[1476]: time="2026-03-07T01:05:04.591613124Z" level=info msg="TearDown network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\" successfully" Mar 7 01:05:04.592049 containerd[1476]: time="2026-03-07T01:05:04.591832217Z" level=info msg="StopPodSandbox for \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\" returns successfully" Mar 7 01:05:04.601140 containerd[1476]: time="2026-03-07T01:05:04.600760998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-594ffc4984-hbs6d,Uid:0dcea30e-abc3-43fe-b161-0a975c6561d9,Namespace:calico-system,Attempt:1,}" Mar 7 01:05:04.602548 systemd[1]: run-netns-cni\x2d1105d837\x2da195\x2db5fc\x2dd53b\x2dbf9d5e5d9484.mount: Deactivated successfully. Mar 7 01:05:04.879639 systemd-networkd[1361]: cali045e502c61e: Link UP Mar 7 01:05:04.885506 systemd-networkd[1361]: cali045e502c61e: Gained carrier Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.720 [INFO][5079] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0 calico-kube-controllers-594ffc4984- calico-system 0dcea30e-abc3-43fe-b161-0a975c6561d9 1074 0 2026-03-07 01:04:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:594ffc4984 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521 calico-kube-controllers-594ffc4984-hbs6d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali045e502c61e [] [] }} 
ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Namespace="calico-system" Pod="calico-kube-controllers-594ffc4984-hbs6d" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.720 [INFO][5079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Namespace="calico-system" Pod="calico-kube-controllers-594ffc4984-hbs6d" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.779 [INFO][5090] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" HandleID="k8s-pod-network.54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.795 [INFO][5090] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" HandleID="k8s-pod-network.54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbf50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", "pod":"calico-kube-controllers-594ffc4984-hbs6d", "timestamp":"2026-03-07 01:05:04.779029825 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036f080)} Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.795 [INFO][5090] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.795 [INFO][5090] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.795 [INFO][5090] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521' Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.800 [INFO][5090] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.808 [INFO][5090] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.815 [INFO][5090] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.820 [INFO][5090] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.824 [INFO][5090] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.825 [INFO][5090] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 
handle="k8s-pod-network.54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.828 [INFO][5090] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.837 [INFO][5090] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:05:04.941740 containerd[1476]: 2026-03-07 01:05:04.859 [INFO][5090] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.200/26] block=192.168.26.192/26 handle="k8s-pod-network.54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:05:04.942865 containerd[1476]: 2026-03-07 01:05:04.859 [INFO][5090] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.200/26] handle="k8s-pod-network.54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" host="ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521" Mar 7 01:05:04.942865 containerd[1476]: 2026-03-07 01:05:04.859 [INFO][5090] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:05:04.942865 containerd[1476]: 2026-03-07 01:05:04.859 [INFO][5090] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.200/26] IPv6=[] ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" HandleID="k8s-pod-network.54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:04.942865 containerd[1476]: 2026-03-07 01:05:04.872 [INFO][5079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Namespace="calico-system" Pod="calico-kube-controllers-594ffc4984-hbs6d" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0", GenerateName:"calico-kube-controllers-594ffc4984-", Namespace:"calico-system", SelfLink:"", UID:"0dcea30e-abc3-43fe-b161-0a975c6561d9", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"594ffc4984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"", Pod:"calico-kube-controllers-594ffc4984-hbs6d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali045e502c61e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:04.942865 containerd[1476]: 2026-03-07 01:05:04.873 [INFO][5079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.200/32] ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Namespace="calico-system" Pod="calico-kube-controllers-594ffc4984-hbs6d" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:04.942865 containerd[1476]: 2026-03-07 01:05:04.873 [INFO][5079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali045e502c61e ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Namespace="calico-system" Pod="calico-kube-controllers-594ffc4984-hbs6d" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:04.942865 containerd[1476]: 2026-03-07 01:05:04.888 [INFO][5079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Namespace="calico-system" Pod="calico-kube-controllers-594ffc4984-hbs6d" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:04.943297 containerd[1476]: 2026-03-07 01:05:04.890 [INFO][5079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Namespace="calico-system" Pod="calico-kube-controllers-594ffc4984-hbs6d" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0", GenerateName:"calico-kube-controllers-594ffc4984-", Namespace:"calico-system", SelfLink:"", UID:"0dcea30e-abc3-43fe-b161-0a975c6561d9", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"594ffc4984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a", Pod:"calico-kube-controllers-594ffc4984-hbs6d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali045e502c61e", MAC:"12:91:ba:72:a7:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 
01:05:04.943297 containerd[1476]: 2026-03-07 01:05:04.934 [INFO][5079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a" Namespace="calico-system" Pod="calico-kube-controllers-594ffc4984-hbs6d" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0" Mar 7 01:05:05.042308 containerd[1476]: time="2026-03-07T01:05:05.041923867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:05:05.049221 containerd[1476]: time="2026-03-07T01:05:05.042100220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:05:05.049221 containerd[1476]: time="2026-03-07T01:05:05.045154763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:05:05.049221 containerd[1476]: time="2026-03-07T01:05:05.045383638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:05:05.160582 systemd[1]: Started cri-containerd-54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a.scope - libcontainer container 54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a. 
Mar 7 01:05:05.319995 containerd[1476]: time="2026-03-07T01:05:05.319938958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-594ffc4984-hbs6d,Uid:0dcea30e-abc3-43fe-b161-0a975c6561d9,Namespace:calico-system,Attempt:1,} returns sandbox id \"54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a\"" Mar 7 01:05:05.385432 containerd[1476]: time="2026-03-07T01:05:05.384484701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:05:05.387999 containerd[1476]: time="2026-03-07T01:05:05.386969958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:05:05.391630 containerd[1476]: time="2026-03-07T01:05:05.389641138Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:05:05.395309 containerd[1476]: time="2026-03-07T01:05:05.395261902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:05:05.396928 containerd[1476]: time="2026-03-07T01:05:05.396866914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.064051814s" Mar 7 01:05:05.397033 containerd[1476]: time="2026-03-07T01:05:05.396933969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:05:05.402837 containerd[1476]: time="2026-03-07T01:05:05.402769702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:05:05.409372 containerd[1476]: time="2026-03-07T01:05:05.409285162Z" level=info msg="CreateContainer within sandbox \"96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:05:05.434532 containerd[1476]: time="2026-03-07T01:05:05.434385193Z" level=info msg="CreateContainer within sandbox \"96be865aec4798ae00ef3b45ce544f51d9a8d0871d8ce0ad37db6b6bab1bced4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8e43fe52c547558dc63360d6bf2a892fa3cd8be80a9626cf2fe5b8137d769005\"" Mar 7 01:05:05.436052 containerd[1476]: time="2026-03-07T01:05:05.435981164Z" level=info msg="StartContainer for \"8e43fe52c547558dc63360d6bf2a892fa3cd8be80a9626cf2fe5b8137d769005\"" Mar 7 01:05:05.491582 systemd[1]: Started cri-containerd-8e43fe52c547558dc63360d6bf2a892fa3cd8be80a9626cf2fe5b8137d769005.scope - libcontainer container 8e43fe52c547558dc63360d6bf2a892fa3cd8be80a9626cf2fe5b8137d769005. Mar 7 01:05:05.556422 containerd[1476]: time="2026-03-07T01:05:05.556146650Z" level=info msg="StartContainer for \"8e43fe52c547558dc63360d6bf2a892fa3cd8be80a9626cf2fe5b8137d769005\" returns successfully" Mar 7 01:05:05.601389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473735738.mount: Deactivated successfully. 
Mar 7 01:05:06.522944 systemd-networkd[1361]: cali045e502c61e: Gained IPv6LL Mar 7 01:05:07.329071 containerd[1476]: time="2026-03-07T01:05:07.328661275Z" level=info msg="StopPodSandbox for \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\"" Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.431 [WARNING][5228] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"233af786-f839-4b49-bfb9-77d5d44842dc", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76", Pod:"coredns-66bc5c9577-r4h6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0cbf6d9b05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.431 [INFO][5228] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.431 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" iface="eth0" netns="" Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.431 [INFO][5228] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.431 [INFO][5228] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.496 [INFO][5237] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" HandleID="k8s-pod-network.94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.496 [INFO][5237] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.496 [INFO][5237] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.514 [WARNING][5237] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" HandleID="k8s-pod-network.94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.514 [INFO][5237] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" HandleID="k8s-pod-network.94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.517 [INFO][5237] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:07.523565 containerd[1476]: 2026-03-07 01:05:07.521 [INFO][5228] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:05:07.525044 containerd[1476]: time="2026-03-07T01:05:07.523610439Z" level=info msg="TearDown network for sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\" successfully" Mar 7 01:05:07.525044 containerd[1476]: time="2026-03-07T01:05:07.523646250Z" level=info msg="StopPodSandbox for \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\" returns successfully" Mar 7 01:05:07.525044 containerd[1476]: time="2026-03-07T01:05:07.524713877Z" level=info msg="RemovePodSandbox for \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\"" Mar 7 01:05:07.525044 containerd[1476]: time="2026-03-07T01:05:07.524758845Z" level=info msg="Forcibly stopping sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\"" Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.619 [WARNING][5251] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"233af786-f839-4b49-bfb9-77d5d44842dc", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"5696addc01719dcf2cb52caf82dbf3ea92772d3f7abe88837827d5dd6d30ba76", Pod:"coredns-66bc5c9577-r4h6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0cbf6d9b05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.619 [INFO][5251] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.619 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" iface="eth0" netns="" Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.620 [INFO][5251] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.620 [INFO][5251] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.681 [INFO][5258] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" HandleID="k8s-pod-network.94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.681 [INFO][5258] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.681 [INFO][5258] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.696 [WARNING][5258] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" HandleID="k8s-pod-network.94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.696 [INFO][5258] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" HandleID="k8s-pod-network.94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--r4h6f-eth0" Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.701 [INFO][5258] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:07.713377 containerd[1476]: 2026-03-07 01:05:07.708 [INFO][5251] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd" Mar 7 01:05:07.714940 containerd[1476]: time="2026-03-07T01:05:07.713386318Z" level=info msg="TearDown network for sandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\" successfully" Mar 7 01:05:07.722745 containerd[1476]: time="2026-03-07T01:05:07.722426304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:05:07.722745 containerd[1476]: time="2026-03-07T01:05:07.722537672Z" level=info msg="RemovePodSandbox \"94ca1a947d37e62e578e7dd8dbfc5675725cc3984d421ffb1cd0730f565253cd\" returns successfully" Mar 7 01:05:07.724110 containerd[1476]: time="2026-03-07T01:05:07.723700774Z" level=info msg="StopPodSandbox for \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\"" Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.837 [WARNING][5273] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.838 [INFO][5273] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.838 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" iface="eth0" netns="" Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.838 [INFO][5273] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.838 [INFO][5273] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.887 [INFO][5281] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" HandleID="k8s-pod-network.c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.888 [INFO][5281] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.888 [INFO][5281] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.899 [WARNING][5281] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" HandleID="k8s-pod-network.c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.900 [INFO][5281] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" HandleID="k8s-pod-network.c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.905 [INFO][5281] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:07.910679 containerd[1476]: 2026-03-07 01:05:07.907 [INFO][5273] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:05:07.911589 containerd[1476]: time="2026-03-07T01:05:07.911218145Z" level=info msg="TearDown network for sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\" successfully" Mar 7 01:05:07.911589 containerd[1476]: time="2026-03-07T01:05:07.911258196Z" level=info msg="StopPodSandbox for \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\" returns successfully" Mar 7 01:05:07.912020 containerd[1476]: time="2026-03-07T01:05:07.911763044Z" level=info msg="RemovePodSandbox for \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\"" Mar 7 01:05:07.912020 containerd[1476]: time="2026-03-07T01:05:07.911803105Z" level=info msg="Forcibly stopping sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\"" Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:07.998 [WARNING][5295] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward 
with the clean up ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.000 [INFO][5295] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.000 [INFO][5295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" iface="eth0" netns="" Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.000 [INFO][5295] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.000 [INFO][5295] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.067 [INFO][5302] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" HandleID="k8s-pod-network.c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.068 [INFO][5302] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.068 [INFO][5302] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.084 [WARNING][5302] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" HandleID="k8s-pod-network.c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.084 [INFO][5302] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" HandleID="k8s-pod-network.c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-whisker--678dd9665c--r2nrk-eth0" Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.087 [INFO][5302] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:08.093635 containerd[1476]: 2026-03-07 01:05:08.090 [INFO][5295] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0" Mar 7 01:05:08.093635 containerd[1476]: time="2026-03-07T01:05:08.093519943Z" level=info msg="TearDown network for sandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\" successfully" Mar 7 01:05:08.102435 containerd[1476]: time="2026-03-07T01:05:08.102253905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:05:08.102780 containerd[1476]: time="2026-03-07T01:05:08.102451236Z" level=info msg="RemovePodSandbox \"c053fc738763ffc1d7bcfa33ff97ab1eff5f2fbd4d84b4f2d6dc68ebeaf944c0\" returns successfully" Mar 7 01:05:08.104218 containerd[1476]: time="2026-03-07T01:05:08.103981411Z" level=info msg="StopPodSandbox for \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\"" Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.191 [WARNING][5317] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fad7ec34-4cf5-4a59-a390-83631ed6b6c6", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89", Pod:"csi-node-driver-hhvgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f6ce361693", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.191 [INFO][5317] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.191 [INFO][5317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" iface="eth0" netns="" Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.191 [INFO][5317] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.191 [INFO][5317] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.251 [INFO][5324] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" HandleID="k8s-pod-network.fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.252 [INFO][5324] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.252 [INFO][5324] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.270 [WARNING][5324] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" HandleID="k8s-pod-network.fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.270 [INFO][5324] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" HandleID="k8s-pod-network.fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.277 [INFO][5324] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:08.285632 containerd[1476]: 2026-03-07 01:05:08.282 [INFO][5317] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:05:08.286870 containerd[1476]: time="2026-03-07T01:05:08.286533100Z" level=info msg="TearDown network for sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\" successfully" Mar 7 01:05:08.286870 containerd[1476]: time="2026-03-07T01:05:08.286575503Z" level=info msg="StopPodSandbox for \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\" returns successfully" Mar 7 01:05:08.287369 containerd[1476]: time="2026-03-07T01:05:08.287235101Z" level=info msg="RemovePodSandbox for \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\"" Mar 7 01:05:08.287369 containerd[1476]: time="2026-03-07T01:05:08.287279955Z" level=info msg="Forcibly stopping sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\"" Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.374 [WARNING][5339] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fad7ec34-4cf5-4a59-a390-83631ed6b6c6", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"8468a9ca842c7ca5fa352f52e3a32c275c6d4d69c5c8bbb8734836bca42d5a89", Pod:"csi-node-driver-hhvgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f6ce361693", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.374 [INFO][5339] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.374 
[INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" iface="eth0" netns="" Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.374 [INFO][5339] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.374 [INFO][5339] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.425 [INFO][5347] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" HandleID="k8s-pod-network.fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.426 [INFO][5347] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.427 [INFO][5347] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.441 [WARNING][5347] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" HandleID="k8s-pod-network.fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.441 [INFO][5347] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" HandleID="k8s-pod-network.fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-csi--node--driver--hhvgj-eth0" Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.443 [INFO][5347] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:08.450929 containerd[1476]: 2026-03-07 01:05:08.447 [INFO][5339] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191" Mar 7 01:05:08.453316 containerd[1476]: time="2026-03-07T01:05:08.451531779Z" level=info msg="TearDown network for sandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\" successfully" Mar 7 01:05:08.462289 containerd[1476]: time="2026-03-07T01:05:08.462159801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:05:08.464374 containerd[1476]: time="2026-03-07T01:05:08.463361444Z" level=info msg="RemovePodSandbox \"fba14c8c9025fcd95847cdd49b803ec2580529db7ca1049634bdf99855034191\" returns successfully" Mar 7 01:05:08.470842 containerd[1476]: time="2026-03-07T01:05:08.470798046Z" level=info msg="StopPodSandbox for \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\"" Mar 7 01:05:08.508150 containerd[1476]: time="2026-03-07T01:05:08.508092898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:05:08.510614 containerd[1476]: time="2026-03-07T01:05:08.510552995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:05:08.512540 containerd[1476]: time="2026-03-07T01:05:08.512500112Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:05:08.518244 containerd[1476]: time="2026-03-07T01:05:08.518169992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:05:08.519090 containerd[1476]: time="2026-03-07T01:05:08.518890862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.116054162s" Mar 7 01:05:08.519090 containerd[1476]: time="2026-03-07T01:05:08.518955162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" 
returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:05:08.549376 containerd[1476]: time="2026-03-07T01:05:08.546733742Z" level=info msg="CreateContainer within sandbox \"54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:05:08.574971 containerd[1476]: time="2026-03-07T01:05:08.574897389Z" level=info msg="CreateContainer within sandbox \"54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d4a3f5cc404ca15c401e2bcc1a5c866d1acc27693a9c8736241bc242d6c5bbb7\"" Mar 7 01:05:08.577294 containerd[1476]: time="2026-03-07T01:05:08.576306561Z" level=info msg="StartContainer for \"d4a3f5cc404ca15c401e2bcc1a5c866d1acc27693a9c8736241bc242d6c5bbb7\"" Mar 7 01:05:08.660981 systemd[1]: Started cri-containerd-d4a3f5cc404ca15c401e2bcc1a5c866d1acc27693a9c8736241bc242d6c5bbb7.scope - libcontainer container d4a3f5cc404ca15c401e2bcc1a5c866d1acc27693a9c8736241bc242d6c5bbb7. Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.563 [WARNING][5362] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"9d68eacf-53a9-41f5-a9a3-d1b563899713", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797", Pod:"goldmane-cccfbd5cf-zlg5q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia0623d6e5be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.563 [INFO][5362] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.563 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" iface="eth0" netns="" Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.563 [INFO][5362] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.563 [INFO][5362] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.644 [INFO][5372] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" HandleID="k8s-pod-network.90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.644 [INFO][5372] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.644 [INFO][5372] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.663 [WARNING][5372] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" HandleID="k8s-pod-network.90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.663 [INFO][5372] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" HandleID="k8s-pod-network.90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0" Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.666 [INFO][5372] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:08.673626 containerd[1476]: 2026-03-07 01:05:08.670 [INFO][5362] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Mar 7 01:05:08.673626 containerd[1476]: time="2026-03-07T01:05:08.673286689Z" level=info msg="TearDown network for sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\" successfully" Mar 7 01:05:08.673626 containerd[1476]: time="2026-03-07T01:05:08.673459121Z" level=info msg="StopPodSandbox for \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\" returns successfully" Mar 7 01:05:08.675699 containerd[1476]: time="2026-03-07T01:05:08.674964105Z" level=info msg="RemovePodSandbox for \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\"" Mar 7 01:05:08.675699 containerd[1476]: time="2026-03-07T01:05:08.675005397Z" level=info msg="Forcibly stopping sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\"" Mar 7 01:05:08.759621 containerd[1476]: time="2026-03-07T01:05:08.759247309Z" level=info msg="StartContainer for 
\"d4a3f5cc404ca15c401e2bcc1a5c866d1acc27693a9c8736241bc242d6c5bbb7\" returns successfully" Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.750 [WARNING][5411] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"9d68eacf-53a9-41f5-a9a3-d1b563899713", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"dd130c5c5e15f6f6085c47c30c1940e267419b14e19dfcc4a828799a72c9a797", Pod:"goldmane-cccfbd5cf-zlg5q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia0623d6e5be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.750 [INFO][5411] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3"
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.751 [INFO][5411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" iface="eth0" netns=""
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.751 [INFO][5411] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3"
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.751 [INFO][5411] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3"
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.807 [INFO][5425] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" HandleID="k8s-pod-network.90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0"
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.808 [INFO][5425] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.808 [INFO][5425] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.818 [WARNING][5425] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" HandleID="k8s-pod-network.90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0"
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.818 [INFO][5425] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" HandleID="k8s-pod-network.90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-goldmane--cccfbd5cf--zlg5q-eth0"
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.820 [INFO][5425] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:05:08.826039 containerd[1476]: 2026-03-07 01:05:08.823 [INFO][5411] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3"
Mar 7 01:05:08.826039 containerd[1476]: time="2026-03-07T01:05:08.825998082Z" level=info msg="TearDown network for sandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\" successfully"
Mar 7 01:05:08.831854 containerd[1476]: time="2026-03-07T01:05:08.831758370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:05:08.832016 containerd[1476]: time="2026-03-07T01:05:08.831964952Z" level=info msg="RemovePodSandbox \"90adfeb7aa15fb6671efd020fd979cbc73a9baa134c7f25eed583a3bd20a4bf3\" returns successfully"
Mar 7 01:05:08.832838 containerd[1476]: time="2026-03-07T01:05:08.832800317Z" level=info msg="StopPodSandbox for \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\""
Mar 7 01:05:08.941585 kubelet[2610]: I0307 01:05:08.940586 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-875ff5fdb-prpjv" podStartSLOduration=6.452361095 podStartE2EDuration="18.939941681s" podCreationTimestamp="2026-03-07 01:04:50 +0000 UTC" firstStartedPulling="2026-03-07 01:04:52.913003394 +0000 UTC m=+45.778568342" lastFinishedPulling="2026-03-07 01:05:05.400583971 +0000 UTC m=+58.266148928" observedRunningTime="2026-03-07 01:05:05.890947123 +0000 UTC m=+58.756512098" watchObservedRunningTime="2026-03-07 01:05:08.939941681 +0000 UTC m=+61.805506656"
Mar 7 01:05:08.964290 ntpd[1436]: Listen normally on 17 cali045e502c61e [fe80::ecee:eeff:feee:eeee%14]:123
Mar 7 01:05:08.965422 ntpd[1436]: 7 Mar 01:05:08 ntpd[1436]: Listen normally on 17 cali045e502c61e [fe80::ecee:eeff:feee:eeee%14]:123
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:08.926 [WARNING][5451] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0", GenerateName:"calico-apiserver-85c759574b-", Namespace:"calico-system", SelfLink:"", UID:"286f9f87-acb9-4bde-81b8-c11f70245864", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c759574b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b", Pod:"calico-apiserver-85c759574b-shlqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif14ec3435a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:08.927 [INFO][5451] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de"
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:08.927 [INFO][5451]
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" iface="eth0" netns=""
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:08.927 [INFO][5451] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de"
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:08.927 [INFO][5451] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de"
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:09.006 [INFO][5466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" HandleID="k8s-pod-network.8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0"
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:09.006 [INFO][5466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:09.007 [INFO][5466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:09.026 [WARNING][5466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" HandleID="k8s-pod-network.8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0"
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:09.026 [INFO][5466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" HandleID="k8s-pod-network.8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0"
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:09.028 [INFO][5466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:05:09.036542 containerd[1476]: 2026-03-07 01:05:09.031 [INFO][5451] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de"
Mar 7 01:05:09.036542 containerd[1476]: time="2026-03-07T01:05:09.034800989Z" level=info msg="TearDown network for sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\" successfully"
Mar 7 01:05:09.036542 containerd[1476]: time="2026-03-07T01:05:09.034911468Z" level=info msg="StopPodSandbox for \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\" returns successfully"
Mar 7 01:05:09.040534 containerd[1476]: time="2026-03-07T01:05:09.039985464Z" level=info msg="RemovePodSandbox for \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\""
Mar 7 01:05:09.040534 containerd[1476]: time="2026-03-07T01:05:09.040040900Z" level=info msg="Forcibly stopping sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\""
Mar 7 01:05:09.062081 kubelet[2610]: I0307 01:05:09.061952 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="calico-system/calico-kube-controllers-594ffc4984-hbs6d" podStartSLOduration=37.866134737 podStartE2EDuration="41.061922585s" podCreationTimestamp="2026-03-07 01:04:28 +0000 UTC" firstStartedPulling="2026-03-07 01:05:05.325503944 +0000 UTC m=+58.191068894" lastFinishedPulling="2026-03-07 01:05:08.521291778 +0000 UTC m=+61.386856742" observedRunningTime="2026-03-07 01:05:08.94244982 +0000 UTC m=+61.808014797" watchObservedRunningTime="2026-03-07 01:05:09.061922585 +0000 UTC m=+61.927487561"
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.110 [WARNING][5493] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0", GenerateName:"calico-apiserver-85c759574b-", Namespace:"calico-system", SelfLink:"", UID:"286f9f87-acb9-4bde-81b8-c11f70245864", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c759574b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"38b3a795d76e3b4008b677f5e94f9a2839b23dc7e6ac6a6f9233d2991d41b00b",
Pod:"calico-apiserver-85c759574b-shlqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif14ec3435a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.110 [INFO][5493] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de"
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.110 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" iface="eth0" netns=""
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.111 [INFO][5493] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de"
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.111 [INFO][5493] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de"
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.141 [INFO][5501] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" HandleID="k8s-pod-network.8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0"
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.141 [INFO][5501] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.141 [INFO][5501] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.150 [WARNING][5501] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" HandleID="k8s-pod-network.8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0"
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.150 [INFO][5501] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" HandleID="k8s-pod-network.8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--shlqv-eth0"
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.153 [INFO][5501] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:05:09.156387 containerd[1476]: 2026-03-07 01:05:09.154 [INFO][5493] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de"
Mar 7 01:05:09.157235 containerd[1476]: time="2026-03-07T01:05:09.156461801Z" level=info msg="TearDown network for sandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\" successfully"
Mar 7 01:05:09.161646 containerd[1476]: time="2026-03-07T01:05:09.161586803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:05:09.161872 containerd[1476]: time="2026-03-07T01:05:09.161679925Z" level=info msg="RemovePodSandbox \"8c1299e21d321cca71a35f75d1d676ee7a353a17620f0c6fc69b44e7f1f5c8de\" returns successfully"
Mar 7 01:05:09.162635 containerd[1476]: time="2026-03-07T01:05:09.162600352Z" level=info msg="StopPodSandbox for \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\""
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.209 [WARNING][5515] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0", GenerateName:"calico-kube-controllers-594ffc4984-", Namespace:"calico-system", SelfLink:"", UID:"0dcea30e-abc3-43fe-b161-0a975c6561d9", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"594ffc4984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a", Pod:"calico-kube-controllers-594ffc4984-hbs6d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers",
IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali045e502c61e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.209 [INFO][5515] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a"
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.209 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" iface="eth0" netns=""
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.209 [INFO][5515] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a"
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.209 [INFO][5515] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a"
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.236 [INFO][5522] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" HandleID="k8s-pod-network.7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0"
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.237 [INFO][5522] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.237 [INFO][5522] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.247 [WARNING][5522] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" HandleID="k8s-pod-network.7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0"
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.247 [INFO][5522] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" HandleID="k8s-pod-network.7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0"
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.249 [INFO][5522] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:05:09.252709 containerd[1476]: 2026-03-07 01:05:09.251 [INFO][5515] cni-plugin/k8s.go 665: Teardown processing complete.
ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a"
Mar 7 01:05:09.253674 containerd[1476]: time="2026-03-07T01:05:09.252731770Z" level=info msg="TearDown network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\" successfully"
Mar 7 01:05:09.253674 containerd[1476]: time="2026-03-07T01:05:09.252767771Z" level=info msg="StopPodSandbox for \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\" returns successfully"
Mar 7 01:05:09.253674 containerd[1476]: time="2026-03-07T01:05:09.253529413Z" level=info msg="RemovePodSandbox for \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\""
Mar 7 01:05:09.253674 containerd[1476]: time="2026-03-07T01:05:09.253569087Z" level=info msg="Forcibly stopping sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\""
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.300 [WARNING][5536] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0", GenerateName:"calico-kube-controllers-594ffc4984-", Namespace:"calico-system", SelfLink:"", UID:"0dcea30e-abc3-43fe-b161-0a975c6561d9", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"594ffc4984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"54118b738fd993e00291863a1cac3ec09c06182c6f0fee24ccd8b96338bdef1a", Pod:"calico-kube-controllers-594ffc4984-hbs6d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali045e502c61e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.300 [INFO][5536] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a"
Mar 7 01:05:09.351187 containerd[1476]:
2026-03-07 01:05:09.300 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" iface="eth0" netns=""
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.300 [INFO][5536] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a"
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.300 [INFO][5536] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a"
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.329 [INFO][5544] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" HandleID="k8s-pod-network.7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0"
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.329 [INFO][5544] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.329 [INFO][5544] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.339 [WARNING][5544] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" HandleID="k8s-pod-network.7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0"
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.339 [INFO][5544] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" HandleID="k8s-pod-network.7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--kube--controllers--594ffc4984--hbs6d-eth0"
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.344 [INFO][5544] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:05:09.351187 containerd[1476]: 2026-03-07 01:05:09.347 [INFO][5536] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a"
Mar 7 01:05:09.354934 containerd[1476]: time="2026-03-07T01:05:09.351496663Z" level=info msg="TearDown network for sandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\" successfully"
Mar 7 01:05:09.359849 containerd[1476]: time="2026-03-07T01:05:09.359802091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:05:09.359987 containerd[1476]: time="2026-03-07T01:05:09.359906776Z" level=info msg="RemovePodSandbox \"7d6eb01dc34f8b8e9d687aacd1367aaa64a75016d67080e5a64c17d719e7e59a\" returns successfully"
Mar 7 01:05:09.360483 containerd[1476]: time="2026-03-07T01:05:09.360451356Z" level=info msg="StopPodSandbox for \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\""
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.409 [WARNING][5559] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0", GenerateName:"calico-apiserver-85c759574b-", Namespace:"calico-system", SelfLink:"", UID:"f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c759574b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d", Pod:"calico-apiserver-85c759574b-842dd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver",
IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali30dad147c7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.409 [INFO][5559] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1"
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.409 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" iface="eth0" netns=""
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.409 [INFO][5559] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1"
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.409 [INFO][5559] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1"
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.438 [INFO][5566] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" HandleID="k8s-pod-network.9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0"
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.438 [INFO][5566] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.438 [INFO][5566] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.447 [WARNING][5566] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" HandleID="k8s-pod-network.9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0"
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.447 [INFO][5566] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" HandleID="k8s-pod-network.9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0"
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.450 [INFO][5566] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:05:09.454389 containerd[1476]: 2026-03-07 01:05:09.452 [INFO][5559] cni-plugin/k8s.go 665: Teardown processing complete.
ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:05:09.456256 containerd[1476]: time="2026-03-07T01:05:09.454418964Z" level=info msg="TearDown network for sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\" successfully" Mar 7 01:05:09.456256 containerd[1476]: time="2026-03-07T01:05:09.454455492Z" level=info msg="StopPodSandbox for \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\" returns successfully" Mar 7 01:05:09.456256 containerd[1476]: time="2026-03-07T01:05:09.455252910Z" level=info msg="RemovePodSandbox for \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\"" Mar 7 01:05:09.456256 containerd[1476]: time="2026-03-07T01:05:09.455293497Z" level=info msg="Forcibly stopping sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\"" Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.501 [WARNING][5581] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0", GenerateName:"calico-apiserver-85c759574b-", Namespace:"calico-system", SelfLink:"", UID:"f43e7b82-4264-485d-8e5b-8aa4fa5b5ef9", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c759574b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"f177566a6ba801fd7eab19bb1486b1d938f7f42e99ef409cb51bb34c878b478d", Pod:"calico-apiserver-85c759574b-842dd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali30dad147c7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.501 [INFO][5581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.501 [INFO][5581] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" iface="eth0" netns="" Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.501 [INFO][5581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.501 [INFO][5581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.529 [INFO][5588] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" HandleID="k8s-pod-network.9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.529 [INFO][5588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.529 [INFO][5588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.543 [WARNING][5588] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" HandleID="k8s-pod-network.9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.543 [INFO][5588] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" HandleID="k8s-pod-network.9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-calico--apiserver--85c759574b--842dd-eth0" Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.545 [INFO][5588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:09.548796 containerd[1476]: 2026-03-07 01:05:09.547 [INFO][5581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1" Mar 7 01:05:09.548796 containerd[1476]: time="2026-03-07T01:05:09.548767372Z" level=info msg="TearDown network for sandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\" successfully" Mar 7 01:05:09.553714 containerd[1476]: time="2026-03-07T01:05:09.553663018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:05:09.553883 containerd[1476]: time="2026-03-07T01:05:09.553754457Z" level=info msg="RemovePodSandbox \"9960240ad7c4554d54344b645ed76bc32b463072eb711db967e13fcae99a2ad1\" returns successfully" Mar 7 01:05:09.554526 containerd[1476]: time="2026-03-07T01:05:09.554481034Z" level=info msg="StopPodSandbox for \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\"" Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.604 [WARNING][5602] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"680dd4ad-45eb-49a1-b4c7-db4a6b1269ec", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a", Pod:"coredns-66bc5c9577-v5hrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali6f6052bd86f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.605 [INFO][5602] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.605 [INFO][5602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" iface="eth0" netns="" Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.605 [INFO][5602] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.605 [INFO][5602] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.631 [INFO][5610] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" HandleID="k8s-pod-network.acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.631 [INFO][5610] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.631 [INFO][5610] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.640 [WARNING][5610] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" HandleID="k8s-pod-network.acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.640 [INFO][5610] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" HandleID="k8s-pod-network.acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.642 [INFO][5610] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:09.645969 containerd[1476]: 2026-03-07 01:05:09.644 [INFO][5602] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:05:09.648020 containerd[1476]: time="2026-03-07T01:05:09.645977021Z" level=info msg="TearDown network for sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\" successfully" Mar 7 01:05:09.648020 containerd[1476]: time="2026-03-07T01:05:09.646014094Z" level=info msg="StopPodSandbox for \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\" returns successfully" Mar 7 01:05:09.648020 containerd[1476]: time="2026-03-07T01:05:09.647173339Z" level=info msg="RemovePodSandbox for \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\"" Mar 7 01:05:09.648020 containerd[1476]: time="2026-03-07T01:05:09.647214281Z" level=info msg="Forcibly stopping sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\"" Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.694 [WARNING][5624] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"680dd4ad-45eb-49a1-b4c7-db4a6b1269ec", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-862cad7eb13e39bfd521", ContainerID:"41b5821256114e50bfe662c31d70bf58e5053bc9bb62a041220e7ed37dc0c13a", Pod:"coredns-66bc5c9577-v5hrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f6052bd86f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.695 [INFO][5624] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.695 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" iface="eth0" netns="" Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.695 [INFO][5624] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.695 [INFO][5624] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.724 [INFO][5631] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" HandleID="k8s-pod-network.acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.725 [INFO][5631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.725 [INFO][5631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.737 [WARNING][5631] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" HandleID="k8s-pod-network.acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.737 [INFO][5631] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" HandleID="k8s-pod-network.acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Workload="ci--4081--3--6--nightly--20260306--2100--862cad7eb13e39bfd521-k8s-coredns--66bc5c9577--v5hrm-eth0" Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.740 [INFO][5631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:05:09.744997 containerd[1476]: 2026-03-07 01:05:09.742 [INFO][5624] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5" Mar 7 01:05:09.745877 containerd[1476]: time="2026-03-07T01:05:09.745078519Z" level=info msg="TearDown network for sandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\" successfully" Mar 7 01:05:09.749920 containerd[1476]: time="2026-03-07T01:05:09.749828960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:05:09.750149 containerd[1476]: time="2026-03-07T01:05:09.749927223Z" level=info msg="RemovePodSandbox \"acf0281de477570c0d935ecce13076391351afed025d9a05e028794e3ce570e5\" returns successfully" Mar 7 01:05:13.731168 kubelet[2610]: I0307 01:05:13.730182 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:05:28.251485 systemd[1]: Started sshd@12-10.128.0.69:22-68.220.241.50:34026.service - OpenSSH per-connection server daemon (68.220.241.50:34026). Mar 7 01:05:28.491506 sshd[5700]: Accepted publickey for core from 68.220.241.50 port 34026 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:28.493526 sshd[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:28.499666 systemd-logind[1454]: New session 10 of user core. Mar 7 01:05:28.509593 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 01:05:28.772719 sshd[5700]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:28.778876 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:05:28.780209 systemd[1]: sshd@12-10.128.0.69:22-68.220.241.50:34026.service: Deactivated successfully. Mar 7 01:05:28.786642 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:05:28.792309 systemd-logind[1454]: Removed session 10. Mar 7 01:05:33.815491 systemd[1]: Started sshd@13-10.128.0.69:22-68.220.241.50:50754.service - OpenSSH per-connection server daemon (68.220.241.50:50754). Mar 7 01:05:34.053383 sshd[5760]: Accepted publickey for core from 68.220.241.50 port 50754 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:34.054922 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:34.064543 systemd-logind[1454]: New session 11 of user core. Mar 7 01:05:34.069623 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 7 01:05:34.303409 sshd[5760]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:34.308952 systemd[1]: sshd@13-10.128.0.69:22-68.220.241.50:50754.service: Deactivated successfully. Mar 7 01:05:34.312107 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:05:34.313485 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:05:34.314829 systemd-logind[1454]: Removed session 11. Mar 7 01:05:39.351747 systemd[1]: Started sshd@14-10.128.0.69:22-68.220.241.50:50768.service - OpenSSH per-connection server daemon (68.220.241.50:50768). Mar 7 01:05:39.572878 sshd[5805]: Accepted publickey for core from 68.220.241.50 port 50768 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:39.574793 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:39.582408 systemd-logind[1454]: New session 12 of user core. Mar 7 01:05:39.589550 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:05:39.821930 sshd[5805]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:39.827744 systemd[1]: sshd@14-10.128.0.69:22-68.220.241.50:50768.service: Deactivated successfully. Mar 7 01:05:39.830392 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:05:39.831816 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:05:39.833258 systemd-logind[1454]: Removed session 12. Mar 7 01:05:43.823054 systemd[1]: sshd@9-10.128.0.69:22-103.213.116.242:49814.service: Deactivated successfully. Mar 7 01:05:44.872355 systemd[1]: Started sshd@15-10.128.0.69:22-68.220.241.50:45584.service - OpenSSH per-connection server daemon (68.220.241.50:45584). 
Mar 7 01:05:45.090379 sshd[5821]: Accepted publickey for core from 68.220.241.50 port 45584 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:45.092580 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:45.100232 systemd-logind[1454]: New session 13 of user core. Mar 7 01:05:45.105669 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 01:05:45.350867 sshd[5821]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:45.363013 systemd[1]: sshd@15-10.128.0.69:22-68.220.241.50:45584.service: Deactivated successfully. Mar 7 01:05:45.367491 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:05:45.369455 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:05:45.373099 systemd-logind[1454]: Removed session 13. Mar 7 01:05:50.397764 systemd[1]: Started sshd@16-10.128.0.69:22-68.220.241.50:45600.service - OpenSSH per-connection server daemon (68.220.241.50:45600). Mar 7 01:05:50.625993 sshd[5858]: Accepted publickey for core from 68.220.241.50 port 45600 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:50.627065 sshd[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:50.633829 systemd-logind[1454]: New session 14 of user core. Mar 7 01:05:50.639554 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:05:50.877429 sshd[5858]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:50.882223 systemd[1]: sshd@16-10.128.0.69:22-68.220.241.50:45600.service: Deactivated successfully. Mar 7 01:05:50.885195 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 01:05:50.889780 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:05:50.894640 systemd-logind[1454]: Removed session 14. 
Mar 7 01:05:50.927084 systemd[1]: Started sshd@17-10.128.0.69:22-68.220.241.50:45606.service - OpenSSH per-connection server daemon (68.220.241.50:45606). Mar 7 01:05:51.144486 sshd[5872]: Accepted publickey for core from 68.220.241.50 port 45606 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:51.145933 sshd[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:51.151888 systemd-logind[1454]: New session 15 of user core. Mar 7 01:05:51.163683 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 01:05:51.469871 sshd[5872]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:51.482590 systemd[1]: sshd@17-10.128.0.69:22-68.220.241.50:45606.service: Deactivated successfully. Mar 7 01:05:51.482875 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:05:51.492263 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:05:51.496789 systemd-logind[1454]: Removed session 15. Mar 7 01:05:51.514835 systemd[1]: Started sshd@18-10.128.0.69:22-68.220.241.50:45608.service - OpenSSH per-connection server daemon (68.220.241.50:45608). Mar 7 01:05:51.742534 sshd[5884]: Accepted publickey for core from 68.220.241.50 port 45608 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:51.744761 sshd[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:51.752467 systemd-logind[1454]: New session 16 of user core. Mar 7 01:05:51.758639 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:05:52.034006 sshd[5884]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:52.042414 systemd[1]: sshd@18-10.128.0.69:22-68.220.241.50:45608.service: Deactivated successfully. Mar 7 01:05:52.046171 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:05:52.048015 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. 
Mar 7 01:05:52.049877 systemd-logind[1454]: Removed session 16. Mar 7 01:05:57.082839 systemd[1]: Started sshd@19-10.128.0.69:22-68.220.241.50:48994.service - OpenSSH per-connection server daemon (68.220.241.50:48994). Mar 7 01:05:57.313120 sshd[5940]: Accepted publickey for core from 68.220.241.50 port 48994 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:57.315323 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:57.323673 systemd-logind[1454]: New session 17 of user core. Mar 7 01:05:57.330602 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:05:57.583068 sshd[5940]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:57.590980 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Mar 7 01:05:57.591745 systemd[1]: sshd@19-10.128.0.69:22-68.220.241.50:48994.service: Deactivated successfully. Mar 7 01:05:57.596510 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:05:57.598086 systemd-logind[1454]: Removed session 17. Mar 7 01:05:57.632775 systemd[1]: Started sshd@20-10.128.0.69:22-68.220.241.50:49002.service - OpenSSH per-connection server daemon (68.220.241.50:49002). Mar 7 01:05:57.869178 sshd[5952]: Accepted publickey for core from 68.220.241.50 port 49002 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:57.871587 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:57.879589 systemd-logind[1454]: New session 18 of user core. Mar 7 01:05:57.884675 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 01:05:58.203642 sshd[5952]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:58.210101 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. Mar 7 01:05:58.211655 systemd[1]: sshd@20-10.128.0.69:22-68.220.241.50:49002.service: Deactivated successfully. 
Mar 7 01:05:58.215066 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:05:58.216737 systemd-logind[1454]: Removed session 18. Mar 7 01:05:58.251767 systemd[1]: Started sshd@21-10.128.0.69:22-68.220.241.50:49012.service - OpenSSH per-connection server daemon (68.220.241.50:49012). Mar 7 01:05:58.494535 sshd[5963]: Accepted publickey for core from 68.220.241.50 port 49012 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:58.496671 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:58.503435 systemd-logind[1454]: New session 19 of user core. Mar 7 01:05:58.510681 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 01:05:59.407503 sshd[5963]: pam_unix(sshd:session): session closed for user core Mar 7 01:05:59.420608 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:05:59.421906 systemd[1]: sshd@21-10.128.0.69:22-68.220.241.50:49012.service: Deactivated successfully. Mar 7 01:05:59.430040 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:05:59.454318 systemd-logind[1454]: Removed session 19. Mar 7 01:05:59.463987 systemd[1]: Started sshd@22-10.128.0.69:22-68.220.241.50:49022.service - OpenSSH per-connection server daemon (68.220.241.50:49022). Mar 7 01:05:59.694300 sshd[5984]: Accepted publickey for core from 68.220.241.50 port 49022 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:05:59.697154 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:05:59.704759 systemd-logind[1454]: New session 20 of user core. Mar 7 01:05:59.711590 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 01:06:00.136031 sshd[5984]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:00.143027 systemd[1]: sshd@22-10.128.0.69:22-68.220.241.50:49022.service: Deactivated successfully. 
Mar 7 01:06:00.146466 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:06:00.147970 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit. Mar 7 01:06:00.149904 systemd-logind[1454]: Removed session 20. Mar 7 01:06:00.186786 systemd[1]: Started sshd@23-10.128.0.69:22-68.220.241.50:49038.service - OpenSSH per-connection server daemon (68.220.241.50:49038). Mar 7 01:06:00.414063 sshd[5998]: Accepted publickey for core from 68.220.241.50 port 49038 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:06:00.416120 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:00.423972 systemd-logind[1454]: New session 21 of user core. Mar 7 01:06:00.428591 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 01:06:00.666873 sshd[5998]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:00.674359 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit. Mar 7 01:06:00.675888 systemd[1]: sshd@23-10.128.0.69:22-68.220.241.50:49038.service: Deactivated successfully. Mar 7 01:06:00.680133 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 01:06:00.681552 systemd-logind[1454]: Removed session 21. Mar 7 01:06:00.895299 systemd[1]: run-containerd-runc-k8s.io-2f7951d37f9b315c87bc367f9215badec009ccf9b31ff6e68f1373229670c671-runc.wb0cVQ.mount: Deactivated successfully. Mar 7 01:06:05.711750 systemd[1]: Started sshd@24-10.128.0.69:22-68.220.241.50:32956.service - OpenSSH per-connection server daemon (68.220.241.50:32956). Mar 7 01:06:05.930922 sshd[6032]: Accepted publickey for core from 68.220.241.50 port 32956 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:06:05.933235 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:05.940229 systemd-logind[1454]: New session 22 of user core. 
Mar 7 01:06:05.945634 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 7 01:06:06.184147 sshd[6032]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:06.189734 systemd[1]: sshd@24-10.128.0.69:22-68.220.241.50:32956.service: Deactivated successfully. Mar 7 01:06:06.193473 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 01:06:06.196059 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:06:06.198082 systemd-logind[1454]: Removed session 22. Mar 7 01:06:11.232261 systemd[1]: Started sshd@25-10.128.0.69:22-68.220.241.50:32962.service - OpenSSH per-connection server daemon (68.220.241.50:32962). Mar 7 01:06:11.466267 sshd[6069]: Accepted publickey for core from 68.220.241.50 port 32962 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:06:11.468978 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:11.477047 systemd-logind[1454]: New session 23 of user core. Mar 7 01:06:11.482679 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 01:06:11.737012 sshd[6069]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:11.745228 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit. Mar 7 01:06:11.746962 systemd[1]: sshd@25-10.128.0.69:22-68.220.241.50:32962.service: Deactivated successfully. Mar 7 01:06:11.751674 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 01:06:11.758103 systemd-logind[1454]: Removed session 23. Mar 7 01:06:16.787821 systemd[1]: Started sshd@26-10.128.0.69:22-68.220.241.50:47510.service - OpenSSH per-connection server daemon (68.220.241.50:47510). 
Mar 7 01:06:17.042535 sshd[6094]: Accepted publickey for core from 68.220.241.50 port 47510 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:06:17.044594 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:17.052140 systemd-logind[1454]: New session 24 of user core. Mar 7 01:06:17.061660 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 01:06:17.313585 sshd[6094]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:17.321542 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:06:17.322601 systemd[1]: sshd@26-10.128.0.69:22-68.220.241.50:47510.service: Deactivated successfully. Mar 7 01:06:17.326952 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:06:17.329065 systemd-logind[1454]: Removed session 24.