Sep 13 00:09:16.138402 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:09:16.138453 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:09:16.138472 kernel: BIOS-provided physical RAM map:
Sep 13 00:09:16.138485 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 13 00:09:16.138497 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 13 00:09:16.138510 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 13 00:09:16.138526 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 13 00:09:16.138545 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 13 00:09:16.138569 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Sep 13 00:09:16.138581 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Sep 13 00:09:16.138594 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Sep 13 00:09:16.138608 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Sep 13 00:09:16.138622 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 13 00:09:16.138635 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 13 00:09:16.138659 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 13 00:09:16.138675 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 13 00:09:16.138690 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 13 00:09:16.138705 kernel: NX (Execute Disable) protection: active
Sep 13 00:09:16.138721 kernel: APIC: Static calls initialized
Sep 13 00:09:16.138736 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:09:16.138753 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Sep 13 00:09:16.138777 kernel: SMBIOS 2.4 present.
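The e820 entries above are the firmware's inventory of physical memory; only the ranges marked "usable" become kernel-managed RAM. A minimal Python sketch (a hypothetical helper, not part of Flatcar's boot tooling) that totals the usable ranges from a saved copy of this log:

    import re

    E820_RE = re.compile(
        r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$", re.MULTILINE
    )

    def usable_bytes(dmesg_text: str) -> int:
        """Sum the sizes of all e820 ranges the firmware marked 'usable'."""
        total = 0
        for start, end, kind in E820_RE.findall(dmesg_text):
            if kind.strip() == "usable":
                total += int(end, 16) - int(start, 16) + 1  # inclusive range
        return total

    # The largest usable range above (0x100000000-0x21fffffff) alone
    # contributes 0x120000000 bytes, i.e. 4.5 GiB.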
Sep 13 00:09:16.138792 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 13 00:09:16.138806 kernel: Hypervisor detected: KVM
Sep 13 00:09:16.138827 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:09:16.138844 kernel: kvm-clock: using sched offset of 13335336984 cycles
Sep 13 00:09:16.138876 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:09:16.138892 kernel: tsc: Detected 2299.998 MHz processor
Sep 13 00:09:16.138908 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:09:16.138923 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:09:16.138940 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 13 00:09:16.138957 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 13 00:09:16.138973 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:09:16.139023 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 13 00:09:16.139041 kernel: Using GB pages for direct mapping
Sep 13 00:09:16.139058 kernel: Secure boot disabled
Sep 13 00:09:16.139075 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:09:16.139092 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 13 00:09:16.139109 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 13 00:09:16.139125 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 13 00:09:16.139148 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 13 00:09:16.139168 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 13 00:09:16.139184 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 13 00:09:16.139202 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 13 00:09:16.139220 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 13 00:09:16.139238 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 13 00:09:16.139255 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 13 00:09:16.139278 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 13 00:09:16.139295 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 13 00:09:16.139310 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 13 00:09:16.139327 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 13 00:09:16.139344 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 13 00:09:16.139362 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 13 00:09:16.139392 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 13 00:09:16.139410 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 13 00:09:16.139427 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 13 00:09:16.139450 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 13 00:09:16.139467 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:09:16.139484 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:09:16.139503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:09:16.139520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 13 00:09:16.139538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 13 00:09:16.139556 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Sep 13 00:09:16.139574 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Sep 13 00:09:16.139591 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Sep 13 00:09:16.139614 kernel: Zone ranges:
Sep 13 00:09:16.139632 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:09:16.139649 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:09:16.139666 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 13 00:09:16.139684 kernel: Movable zone start for each node
Sep 13 00:09:16.139702 kernel: Early memory node ranges
Sep 13 00:09:16.139719 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 13 00:09:16.139736 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 13 00:09:16.139754 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Sep 13 00:09:16.139776 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 13 00:09:16.139793 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 13 00:09:16.139811 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 13 00:09:16.139829 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:09:16.139846 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 13 00:09:16.139879 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 13 00:09:16.139905 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 13 00:09:16.139921 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 13 00:09:16.139939 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:09:16.139962 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:09:16.139980 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:09:16.139998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:09:16.140016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:09:16.140031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:09:16.140050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:09:16.140068 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:09:16.140086 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:09:16.140104 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 13 00:09:16.140126 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:09:16.140144 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:09:16.140163 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:09:16.140181 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:09:16.140199 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:09:16.140217 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:09:16.140233 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:09:16.140250 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:09:16.140270 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:09:16.140293 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:09:16.140311 kernel: random: crng init done
Sep 13 00:09:16.140329 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 13 00:09:16.140347 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:09:16.140365 kernel: Fallback order for Node 0: 0
Sep 13 00:09:16.140393 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Sep 13 00:09:16.140411 kernel: Policy zone: Normal
Sep 13 00:09:16.140429 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:09:16.140451 kernel: software IO TLB: area num 2.
Sep 13 00:09:16.140470 kernel: Memory: 7513400K/7860584K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 346924K reserved, 0K cma-reserved)
Sep 13 00:09:16.140487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:09:16.140506 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:09:16.140524 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:09:16.140542 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:09:16.140560 kernel: Dynamic Preempt: voluntary
Sep 13 00:09:16.140579 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:09:16.140599 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:09:16.140636 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:09:16.140656 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:09:16.140676 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:09:16.140699 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:09:16.140719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:09:16.140738 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:09:16.140756 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:09:16.140775 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:09:16.140795 kernel: Console: colour dummy device 80x25
Sep 13 00:09:16.140818 kernel: printk: console [ttyS0] enabled
Sep 13 00:09:16.140838 kernel: ACPI: Core revision 20230628
Sep 13 00:09:16.140895 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:09:16.140914 kernel: x2apic enabled
Sep 13 00:09:16.140933 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:09:16.140952 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 13 00:09:16.140972 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 13 00:09:16.140991 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 13 00:09:16.141016 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 13 00:09:16.141035 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 13 00:09:16.141055 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:09:16.141074 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 00:09:16.141094 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 00:09:16.141112 kernel: Spectre V2 : Mitigation: IBRS
Sep 13 00:09:16.141131 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:09:16.141149 kernel: RETBleed: Mitigation: IBRS
Sep 13 00:09:16.141168 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:09:16.141193 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 13 00:09:16.141211 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:09:16.141230 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:09:16.141249 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:09:16.141267 kernel: active return thunk: its_return_thunk
Sep 13 00:09:16.141287 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:09:16.141307 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:09:16.141326 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:09:16.141346 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:09:16.141370 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:09:16.141397 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:09:16.141417 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:09:16.141436 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:09:16.141456 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:09:16.141475 kernel: landlock: Up and running.
Sep 13 00:09:16.141495 kernel: SELinux: Initializing.
Sep 13 00:09:16.141515 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:09:16.141535 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:09:16.141559 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 13 00:09:16.141578 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:09:16.141598 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:09:16.141617 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:09:16.141636 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 13 00:09:16.141656 kernel: signal: max sigframe size: 1776
Sep 13 00:09:16.141675 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:09:16.141695 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:09:16.141714 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:09:16.141737 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:09:16.141757 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:09:16.141776 kernel: .... node #0, CPUs: #1
Sep 13 00:09:16.141797 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:09:16.141816 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:09:16.141835 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:09:16.141872 kernel: smpboot: Max logical packages: 1
Sep 13 00:09:16.141898 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 13 00:09:16.141919 kernel: devtmpfs: initialized
Sep 13 00:09:16.141946 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:09:16.141966 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 13 00:09:16.141986 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:09:16.142004 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:09:16.142020 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:09:16.142039 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:09:16.142058 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:09:16.142077 kernel: audit: type=2000 audit(1757722154.205:1): state=initialized audit_enabled=0 res=1
Sep 13 00:09:16.142100 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:09:16.142120 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:09:16.142138 kernel: cpuidle: using governor menu
Sep 13 00:09:16.142157 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:09:16.142176 kernel: dca service started, version 1.12.1
Sep 13 00:09:16.142195 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:09:16.142214 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:09:16.142233 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:09:16.142252 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:09:16.142276 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:09:16.142295 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:09:16.142314 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:09:16.142332 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:09:16.142350 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:09:16.142370 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:09:16.142398 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:09:16.142417 kernel: ACPI: Interpreter enabled
Sep 13 00:09:16.142437 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:09:16.142460 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:09:16.142478 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:09:16.142497 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 13 00:09:16.142515 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 13 00:09:16.142534 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:09:16.144322 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:09:16.144614 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:09:16.144816 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:09:16.144842 kernel: PCI host bridge to bus 0000:00
Sep 13 00:09:16.145062 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:09:16.145243 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:09:16.145422 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:09:16.145592 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 13 00:09:16.145763 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:09:16.146183 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:09:16.146441 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Sep 13 00:09:16.146654 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:09:16.146907 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:09:16.147138 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Sep 13 00:09:16.147365 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Sep 13 00:09:16.147569 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Sep 13 00:09:16.147772 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:09:16.148054 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Sep 13 00:09:16.148250 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Sep 13 00:09:16.148460 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:09:16.148652 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Sep 13 00:09:16.148845 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Sep 13 00:09:16.148965 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:09:16.148983 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:09:16.149001 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:09:16.149018 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:09:16.149035 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:09:16.149053 kernel: iommu: Default domain type: Translated
Sep 13 00:09:16.149070 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:09:16.149088 kernel: efivars: Registered efivars operations
Sep 13 00:09:16.149106 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:09:16.149128 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:09:16.149146 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 13 00:09:16.149163 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 13 00:09:16.149181 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 13 00:09:16.149198 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 13 00:09:16.149216 kernel: vgaarb: loaded
Sep 13 00:09:16.149232 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:09:16.149249 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:09:16.149267 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:09:16.149290 kernel: pnp: PnP ACPI init
Sep 13 00:09:16.149308 kernel: pnp: PnP ACPI: found 7 devices
Sep 13 00:09:16.149327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:09:16.149345 kernel: NET: Registered PF_INET protocol family
Sep 13 00:09:16.149364 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:09:16.149393 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 13 00:09:16.149412 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:09:16.149431 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:09:16.149449 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 13 00:09:16.149472 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 13 00:09:16.149491 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:09:16.149510 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:09:16.149529 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:09:16.149547 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:09:16.149748 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:09:16.151141 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:09:16.151357 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:09:16.151557 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 13 00:09:16.151763 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:09:16.151791 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:09:16.151812 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 00:09:16.151829 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 13 00:09:16.151846 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:09:16.151891 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 13 00:09:16.151909 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:09:16.151933 kernel: Initialise system trusted keyrings
Sep 13 00:09:16.151950 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 13 00:09:16.151967 kernel: Key type asymmetric registered
Sep 13 00:09:16.151984 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:09:16.152001 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:09:16.152018 kernel: io scheduler mq-deadline registered
Sep 13 00:09:16.152035 kernel: io scheduler kyber registered
Sep 13 00:09:16.152053 kernel: io scheduler bfq registered
Sep 13 00:09:16.152070 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:09:16.152093 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:09:16.152304 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 13 00:09:16.152331 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 13 00:09:16.152535 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 13 00:09:16.152561 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:09:16.152755 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 13 00:09:16.152782 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:09:16.152801 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:09:16.152821 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 13 00:09:16.152846 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 13 00:09:16.152900 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 13 00:09:16.153097 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 13 00:09:16.153125 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:09:16.153145 kernel: i8042: Warning: Keylock active
Sep 13 00:09:16.153164 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:09:16.153184 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:09:16.153391 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:09:16.153581 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:09:16.153757 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:09:15 UTC (1757722155)
Sep 13 00:09:16.156011 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:09:16.156049 kernel: intel_pstate: CPU model not supported
Sep 13 00:09:16.156071 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:09:16.156091 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:09:16.156112 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:09:16.156131 kernel: Segment Routing with IPv6
Sep 13 00:09:16.156158 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:09:16.156178 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:09:16.156196 kernel: Key type dns_resolver registered
Sep 13 00:09:16.156216 kernel: IPI shorthand broadcast: enabled
Sep 13 00:09:16.156236 kernel: sched_clock: Marking stable (913005190, 161164027)->(1146557187, -72387970)
Sep 13 00:09:16.156256 kernel: registered taskstats version 1
Sep 13 00:09:16.156276 kernel: Loading compiled-in X.509 certificates
Sep 13 00:09:16.156295 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:09:16.156315 kernel: Key type .fscrypt registered
Sep 13 00:09:16.156339 kernel: Key type fscrypt-provisioning registered
Sep 13 00:09:16.156359 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:09:16.156387 kernel: ima: No architecture policies found
Sep 13 00:09:16.156406 kernel: clk: Disabling unused clocks
Sep 13 00:09:16.156426 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:09:16.156446 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:09:16.156466 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:09:16.156485 kernel: Run /init as init process
Sep 13 00:09:16.156509 kernel: with arguments:
Sep 13 00:09:16.156529 kernel: /init
Sep 13 00:09:16.156549 kernel: with environment:
Sep 13 00:09:16.156568 kernel: HOME=/
Sep 13 00:09:16.156588 kernel: TERM=linux
Sep 13 00:09:16.156608 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:09:16.156627 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 13 00:09:16.156650 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:09:16.156679 systemd[1]: Detected virtualization google.
Sep 13 00:09:16.156700 systemd[1]: Detected architecture x86-64.
Sep 13 00:09:16.156721 systemd[1]: Running in initrd.
Sep 13 00:09:16.156741 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:09:16.156759 systemd[1]: Hostname set to <localhost>.
Sep 13 00:09:16.156780 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:09:16.156801 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:09:16.156822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:09:16.156847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:09:16.156917 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:09:16.156938 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:09:16.156957 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:09:16.156976 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:09:16.156998 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:09:16.157039 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:09:16.157060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:09:16.157081 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:09:16.157122 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:09:16.157149 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:09:16.157170 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:09:16.157190 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:09:16.157216 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:09:16.157237 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:09:16.157258 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:09:16.157279 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:09:16.157300 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:09:16.157320 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:09:16.157341 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:09:16.157361 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:09:16.157395 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:09:16.157417 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:09:16.157438 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:09:16.157459 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:09:16.157480 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:09:16.157500 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:09:16.157520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:16.157541 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:09:16.157603 systemd-journald[183]: Collecting audit messages is disabled.
Sep 13 00:09:16.157654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:09:16.157675 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:09:16.157703 systemd-journald[183]: Journal started
Sep 13 00:09:16.157745 systemd-journald[183]: Runtime Journal (/run/log/journal/0bd6a0a300b14528ae6639b225f39c81) is 8.0M, max 148.7M, 140.7M free.
Sep 13 00:09:16.160952 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:09:16.127838 systemd-modules-load[184]: Inserted module 'overlay'
Sep 13 00:09:16.166871 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:09:16.179188 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:09:16.189725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:16.200021 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:09:16.200064 kernel: Bridge firewalling registered
Sep 13 00:09:16.196066 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:09:16.199216 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 13 00:09:16.209633 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:09:16.217650 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:09:16.232139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:09:16.235783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:09:16.255897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:09:16.273167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:09:16.277356 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:09:16.289258 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:09:16.293403 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:16.302468 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:09:16.332599 dracut-cmdline[218]: dracut-dracut-053
Sep 13 00:09:16.337800 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:09:16.337597 systemd-resolved[213]: Positive Trust Anchors:
Sep 13 00:09:16.337616 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:09:16.337695 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:09:16.345672 systemd-resolved[213]: Defaulting to hostname 'linux'.
Sep 13 00:09:16.347613 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:09:16.358370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:09:16.444909 kernel: SCSI subsystem initialized
Sep 13 00:09:16.457905 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:09:16.470896 kernel: iscsi: registered transport (tcp)
Sep 13 00:09:16.497674 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:09:16.497770 kernel: QLogic iSCSI HBA Driver
Sep 13 00:09:16.553532 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:09:16.560092 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:09:16.607933 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:09:16.608026 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:09:16.608054 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:09:16.655922 kernel: raid6: avx2x4 gen() 17874 MB/s
Sep 13 00:09:16.672899 kernel: raid6: avx2x2 gen() 18021 MB/s
Sep 13 00:09:16.691025 kernel: raid6: avx2x1 gen() 13823 MB/s
Sep 13 00:09:16.691117 kernel: raid6: using algorithm avx2x2 gen() 18021 MB/s
Sep 13 00:09:16.708930 kernel: raid6: .... xor() 17456 MB/s, rmw enabled
Sep 13 00:09:16.709035 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:09:16.733904 kernel: xor: automatically using best checksumming function avx
Sep 13 00:09:16.907894 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:09:16.922171 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:09:16.928121 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:09:16.974036 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Sep 13 00:09:16.981174 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
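The dracut-cmdline hook above re-tokenizes the kernel command line: whitespace-separated key=value pairs, with bare tokens acting as flags. A small Python sketch of that tokenization (a hypothetical helper, not dracut's actual parser; in this sketch the last duplicate key, such as the repeated rootflags=rw here, wins):

    def parse_cmdline(cmdline: str) -> dict:
        """Tokenize a kernel command line into a parameter dict."""
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    params = parse_cmdline(
        "rd.driver.pre=btrfs rootflags=rw mount.usr=/dev/mapper/usr "
        "root=LABEL=ROOT flatcar.first_boot=detected flatcar.oem.id=gce"
    )
    assert params["flatcar.oem.id"] == "gce"
    assert params["rd.driver.pre"] == "btrfs"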
Sep 13 00:09:17.015102 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:09:17.035113 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Sep 13 00:09:17.072012 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:09:17.096141 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:09:17.207594 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:09:17.226113 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:09:17.279744 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:09:17.303344 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:09:17.361035 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:09:17.361088 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:09:17.316012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:09:17.393379 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Sep 13 00:09:17.393457 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:09:17.393484 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:09:17.333242 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:09:17.357211 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:09:17.436935 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:09:17.437225 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:17.543123 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Sep 13 00:09:17.543520 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Sep 13 00:09:17.543757 kernel: sd 0:0:1:0: [sda] Write Protect is off
Sep 13 00:09:17.544022 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Sep 13 00:09:17.544248 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 13 00:09:17.544484 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:09:17.544520 kernel: GPT:17805311 != 25165823
Sep 13 00:09:17.544543 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:09:17.460985 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:09:17.569023 kernel: GPT:17805311 != 25165823
Sep 13 00:09:17.569063 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:09:17.569093 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:17.569128 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Sep 13 00:09:17.483167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:09:17.483568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:17.507055 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:17.616451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:17.640913 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (446)
Sep 13 00:09:17.653896 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (459)
Sep 13 00:09:17.660812 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
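The GPT complaints above ("GPT:17805311 != 25165823") are expected on first boot: the backup GPT header still sits where the smaller source image ended (LBA 17805311) rather than at the end of the grown 12 GiB disk (LBA 25165823). Flatcar's disk-uuid.service repairs this automatically a moment later (see the disk-uuid[540] lines below); a hedged sketch of the manual equivalent, assuming the GPT-fdisk tools are available and the disk is /dev/sda as logged here:

    import subprocess

    DISK = "/dev/sda"  # assumption; matches the [sda] messages above

    def relocate_backup_gpt(disk: str = DISK) -> None:
        """Move the backup GPT header/entries to the last sectors of the disk."""
        # 'sgdisk -e' relocates backup GPT data structures to the disk's end.
        subprocess.run(["sgdisk", "-e", disk], check=True)
        # Ask the kernel to re-read the now-consistent partition table.
        subprocess.run(["partprobe", disk], check=True)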
Sep 13 00:09:17.693513 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:17.708465 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Sep 13 00:09:17.725313 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Sep 13 00:09:17.746826 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 13 00:09:17.778309 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Sep 13 00:09:17.778593 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Sep 13 00:09:17.806182 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:09:17.855264 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:09:17.869513 disk-uuid[540]: Primary Header is updated.
Sep 13 00:09:17.869513 disk-uuid[540]: Secondary Entries is updated.
Sep 13 00:09:17.869513 disk-uuid[540]: Secondary Header is updated.
Sep 13 00:09:17.892618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:17.928105 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:17.928146 kernel: GPT:disk_guids don't match.
Sep 13 00:09:17.928173 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:09:17.928199 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:17.947886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:18.940040 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:18.940139 disk-uuid[541]: The operation has completed successfully.
Sep 13 00:09:19.021955 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:09:19.022169 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:09:19.047069 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:09:19.082263 sh[566]: Success
Sep 13 00:09:19.106883 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:09:19.193522 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:09:19.201112 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:09:19.224535 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:09:19.278720 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:09:19.278817 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:19.278844 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:09:19.294984 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:09:19.295059 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:09:19.338916 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:09:19.347200 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:09:19.348194 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:09:19.353258 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
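verity-setup.service above assembles /dev/mapper/usr as a dm-verity target: every read from the read-only /usr filesystem is checked against a hash tree whose root must equal the verity.usrhash= value from the kernel command line. A rough sketch of the same operation with veritysetup; the hash-device layout here is an assumption (Flatcar keeps the hash data on the USR-A partition itself), so this is illustrative only:

    import subprocess

    # Both values come from the kernel command line logged earlier.
    DATA_DEV = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"
    ROOT_HASH = "2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534"

    def open_usr_verity(hash_dev: str = DATA_DEV) -> None:
        """Create /dev/mapper/usr; reads that fail hash verification error out."""
        subprocess.run(
            ["veritysetup", "open", DATA_DEV, "usr", hash_dev, ROOT_HASH],
            check=True,
        )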
Sep 13 00:09:19.430015 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:19.430051 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:19.430068 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:09:19.375077 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:09:19.452784 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:09:19.452838 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:09:19.459061 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:09:19.476053 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:19.490514 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:09:19.508330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:09:19.603250 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:09:19.610128 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:09:19.680447 systemd-networkd[749]: lo: Link UP
Sep 13 00:09:19.680460 systemd-networkd[749]: lo: Gained carrier
Sep 13 00:09:19.686066 systemd-networkd[749]: Enumeration completed
Sep 13 00:09:19.686712 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:09:19.686719 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:09:19.686925 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:09:19.702392 systemd-networkd[749]: eth0: Link UP
Sep 13 00:09:19.746497 ignition[667]: Ignition 2.19.0
Sep 13 00:09:19.702400 systemd-networkd[749]: eth0: Gained carrier
Sep 13 00:09:19.746506 ignition[667]: Stage: fetch-offline
Sep 13 00:09:19.702418 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:09:19.746552 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:19.720037 systemd-networkd[749]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c.c.flatcar-212911.internal' to 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c'
Sep 13 00:09:19.746563 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:09:19.720061 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.50/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 13 00:09:19.746684 ignition[667]: parsed url from cmdline: ""
Sep 13 00:09:19.743188 systemd[1]: Reached target network.target - Network.
Sep 13 00:09:19.746691 ignition[667]: no config URL provided
Sep 13 00:09:19.752407 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:09:19.746699 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:09:19.780125 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:09:19.746710 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:09:19.832378 unknown[757]: fetched base config from "system"
Sep 13 00:09:19.746718 ignition[667]: failed to fetch config: resource requires networking
Sep 13 00:09:19.832393 unknown[757]: fetched base config from "system"
Sep 13 00:09:19.747215 ignition[667]: Ignition finished successfully
Sep 13 00:09:19.832407 unknown[757]: fetched user config from "gcp"
Sep 13 00:09:19.822209 ignition[757]: Ignition 2.19.0
Sep 13 00:09:19.835081 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:09:19.822221 ignition[757]: Stage: fetch
Sep 13 00:09:19.854235 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:09:19.822462 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:19.927716 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:09:19.822475 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:09:19.946090 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:09:19.822607 ignition[757]: parsed url from cmdline: ""
Sep 13 00:09:19.975654 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:09:19.822615 ignition[757]: no config URL provided
Sep 13 00:09:20.003710 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:09:19.822624 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:09:20.024222 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:09:19.822636 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:09:20.047215 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:09:19.822663 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Sep 13 00:09:20.069215 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:09:19.826615 ignition[757]: GET result: OK
Sep 13 00:09:20.096174 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:09:19.826721 ignition[757]: parsing config with SHA512: e6651b9a810d09317e5ddeea680dcd48caa8ac3e30fc1fd52feaadf136cb76cca313ce9d7c0adfa32d062f603fed01c4ffcf64aebe8472d03acfd403cab79a76
Sep 13 00:09:20.111086 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:09:19.833059 ignition[757]: fetch: fetch complete
Sep 13 00:09:19.833069 ignition[757]: fetch: fetch passed
Sep 13 00:09:19.833134 ignition[757]: Ignition finished successfully
Sep 13 00:09:19.925149 ignition[763]: Ignition 2.19.0
Sep 13 00:09:19.925158 ignition[763]: Stage: kargs
Sep 13 00:09:19.925398 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:19.925411 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:09:19.926457 ignition[763]: kargs: kargs passed
Sep 13 00:09:19.926540 ignition[763]: Ignition finished successfully
Sep 13 00:09:19.970938 ignition[768]: Ignition 2.19.0
Sep 13 00:09:19.970948 ignition[768]: Stage: disks
Sep 13 00:09:19.971187 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:19.971200 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:09:19.972809 ignition[768]: disks: disks passed
Sep 13 00:09:19.972910 ignition[768]: Ignition finished successfully
Sep 13 00:09:20.172923 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 13 00:09:20.320026 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:09:20.326081 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:09:20.465926 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:09:20.466649 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:09:20.467642 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:09:20.490059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:09:20.523062 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:09:20.564176 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (785)
Sep 13 00:09:20.564225 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:20.564251 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:20.564275 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:09:20.532647 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:09:20.595933 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:09:20.595999 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:09:20.532719 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:09:20.532752 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:09:20.603078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:09:20.626995 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:09:20.656159 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:09:20.801521 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:09:20.813014 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:09:20.823793 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:09:20.834993 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:09:20.984674 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
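The fetch stage that just completed pulled the user-provided Ignition config from the GCE metadata server (the "GET http://169.254.169.254/..." and "GET result: OK" lines above). Any client repeating that request must send the Metadata-Flavor: Google header, or the server refuses it. A minimal Python sketch of the same request:

    import urllib.request

    # The same endpoint Ignition queried in the fetch stage above.
    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")

    def fetch_user_data() -> bytes:
        """Return the raw user-data (the Ignition config) from GCE metadata."""
        req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read()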
Sep 13 00:09:20.990063 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:09:21.010136 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:09:21.047880 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:21.049594 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:09:21.073934 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:09:21.090031 ignition[898]: INFO : Ignition 2.19.0
Sep 13 00:09:21.090031 ignition[898]: INFO : Stage: mount
Sep 13 00:09:21.090031 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:21.090031 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:09:21.090031 ignition[898]: INFO : mount: mount passed
Sep 13 00:09:21.090031 ignition[898]: INFO : Ignition finished successfully
Sep 13 00:09:21.092089 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:09:21.115110 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:09:21.389145 systemd-networkd[749]: eth0: Gained IPv6LL
Sep 13 00:09:21.472195 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:09:21.520910 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (909)
Sep 13 00:09:21.538627 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:21.538723 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:21.538749 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:09:21.561430 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:09:21.561528 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:09:21.565067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:09:21.604926 ignition[926]: INFO : Ignition 2.19.0 Sep 13 00:09:21.604926 ignition[926]: INFO : Stage: files Sep 13 00:09:21.619009 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:21.619009 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:09:21.619009 ignition[926]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:09:21.619009 ignition[926]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:09:21.619009 ignition[926]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:09:21.619009 ignition[926]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:09:21.619009 ignition[926]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:09:21.619009 ignition[926]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:09:21.619009 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:09:21.619009 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 00:09:21.614491 unknown[926]: wrote ssh authorized keys file for user: core Sep 13 00:09:21.757878 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:09:22.261883 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 00:09:22.678498 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:09:23.454955 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:23.454955 ignition[926]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:23.494074 ignition[926]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:23.494074 ignition[926]: INFO : files: files passed Sep 13 00:09:23.494074 ignition[926]: INFO : Ignition finished successfully Sep 13 00:09:23.462542 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:09:23.481150 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:09:23.527160 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:09:23.538653 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:09:23.712081 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:23.712081 initrd-setup-root-after-ignition[954]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:23.538785 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:09:23.748105 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:23.595530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:23.622402 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:09:23.654136 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:09:23.753277 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:09:23.753439 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:09:23.763441 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Sep 13 00:09:23.794233 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:09:23.814293 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:09:23.821264 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:09:23.912626 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:23.938144 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:09:23.984446 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:23.996403 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:24.006477 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:09:24.042220 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:09:24.042625 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:24.071308 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:09:24.071679 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:09:24.088536 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:09:24.104443 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:09:24.133353 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:09:24.143428 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:09:24.161432 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:09:24.200256 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:09:24.200686 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:09:24.217433 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:09:24.234423 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:09:24.234633 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:09:24.275317 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:24.285280 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:24.306282 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:09:24.306451 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:24.328202 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:09:24.328445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:09:24.359323 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:09:24.359563 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:24.379391 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:09:24.379630 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Sep 13 00:09:24.449136 ignition[979]: INFO : Ignition 2.19.0 Sep 13 00:09:24.449136 ignition[979]: INFO : Stage: umount Sep 13 00:09:24.449136 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:24.449136 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:09:24.449136 ignition[979]: INFO : umount: umount passed Sep 13 00:09:24.449136 ignition[979]: INFO : Ignition finished successfully Sep 13 00:09:24.406203 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:09:24.446207 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:09:24.457090 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:09:24.457530 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:24.528347 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:09:24.528535 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:09:24.561459 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:09:24.562473 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:09:24.562596 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:09:24.568061 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:09:24.568193 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:09:24.586628 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:09:24.586763 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:09:24.603567 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:09:24.603639 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:09:24.629247 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:09:24.629330 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:09:24.655248 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:09:24.655336 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:09:24.673339 systemd[1]: Stopped target network.target - Network. Sep 13 00:09:24.689146 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:09:24.689266 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:09:24.697330 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:09:24.717286 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:09:24.720991 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:24.733253 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:09:24.759145 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:09:24.767312 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:09:24.767376 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:09:24.800243 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:09:24.800319 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:09:24.808268 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:09:24.808348 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:09:24.840240 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:09:24.840322 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Sep 13 00:09:24.860229 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:09:24.860312 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:09:24.868556 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:09:24.873973 systemd-networkd[749]: eth0: DHCPv6 lease lost Sep 13 00:09:24.897285 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:09:24.920653 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:09:24.920798 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:09:24.940092 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:09:24.940387 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:09:24.958213 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:09:24.958272 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:24.972001 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:09:25.025036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:09:25.025260 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:09:25.046255 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:09:25.046336 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:25.066278 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:09:25.066365 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:25.076322 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:09:25.076401 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:09:25.115404 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:25.134783 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:09:25.135078 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:09:25.160662 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:09:25.160788 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:09:25.181122 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:09:25.181225 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:25.199248 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:09:25.199319 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:25.209311 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:09:25.209394 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:09:25.261040 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:09:25.261258 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:09:25.289335 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:09:25.289434 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:25.334089 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
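One of the units torn down above, parse-ip-for-networkd.service, is described in its own log line as "Write systemd-networkd units from cmdline". As context, a hedged sketch of that kind of translation, turning a dracut-style ip= kernel argument into a .network fragment; the field order is the one documented in dracut.cmdline(7), but the code is illustrative only and not Flatcar's actual generator:

def netmask_to_prefix(netmask: str) -> int:
    # e.g. "255.255.255.0" -> 24
    return sum(bin(int(octet)).count("1") for octet in netmask.split("."))

def ip_karg_to_network(karg: str) -> str:
    # ip=<client>:<peer>:<gateway>:<netmask>:<hostname>:<interface>:<method>
    fields = (karg.removeprefix("ip=").split(":") + [""] * 7)[:7]
    client, _peer, gateway, netmask, _hostname, iface, method = fields
    if method in ("", "dhcp", "dhcp6", "auto6", "on", "any"):
        network = "DHCP=yes"
    else:  # "none"/"off" mean statically configured
        network = f"Address={client}/{netmask_to_prefix(netmask)}\nGateway={gateway}"
    return f"[Match]\nName={iface or '*'}\n\n[Network]\n{network}\n"

print(ip_karg_to_network("ip=10.0.0.2::10.0.0.1:255.255.255.0:node1:eth0:none"))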
Sep 13 00:09:25.358045 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:09:25.358186 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:25.380137 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:09:25.380231 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:09:25.399137 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:09:25.399238 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:25.421159 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:25.421258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:25.441670 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:09:25.441838 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:09:25.460814 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:09:25.486140 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:09:25.521505 systemd[1]: Switching root. Sep 13 00:09:25.724085 systemd-journald[183]: Journal stopped Sep 13 00:09:16.138402 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025 Sep 13 00:09:16.138453 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:09:16.138472 kernel: BIOS-provided physical RAM map: Sep 13 00:09:16.138485 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 13 00:09:16.138497 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 13 00:09:16.138510 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 13 00:09:16.138526 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 13 00:09:16.138545 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 13 00:09:16.138569 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Sep 13 00:09:16.138581 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Sep 13 00:09:16.138594 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Sep 13 00:09:16.138608 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Sep 13 00:09:16.138622 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 13 00:09:16.138635 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 13 00:09:16.138659 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 13 00:09:16.138675 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 13 00:09:16.138690 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 13 00:09:16.138705 kernel: NX (Execute Disable) protection: active Sep 13 00:09:16.138721 kernel: APIC: Static calls initialized Sep 13 00:09:16.138736 kernel: efi: EFI v2.7 by EDK II Sep 13 00:09:16.138753 
kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Sep 13 00:09:16.138777 kernel: SMBIOS 2.4 present. Sep 13 00:09:16.138792 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025 Sep 13 00:09:16.138806 kernel: Hypervisor detected: KVM Sep 13 00:09:16.138827 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:09:16.138844 kernel: kvm-clock: using sched offset of 13335336984 cycles Sep 13 00:09:16.138876 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:09:16.138892 kernel: tsc: Detected 2299.998 MHz processor Sep 13 00:09:16.138908 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:09:16.138923 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:09:16.138940 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 13 00:09:16.138957 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Sep 13 00:09:16.138973 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:09:16.139023 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 13 00:09:16.139041 kernel: Using GB pages for direct mapping Sep 13 00:09:16.139058 kernel: Secure boot disabled Sep 13 00:09:16.139075 kernel: ACPI: Early table checksum verification disabled Sep 13 00:09:16.139092 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 13 00:09:16.139109 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 13 00:09:16.139125 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 13 00:09:16.139148 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 13 00:09:16.139168 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 13 00:09:16.139184 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Sep 13 00:09:16.139202 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 13 00:09:16.139220 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 13 00:09:16.139238 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 13 00:09:16.139255 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 13 00:09:16.139278 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 13 00:09:16.139295 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 13 00:09:16.139310 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 13 00:09:16.139327 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 13 00:09:16.139344 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 13 00:09:16.139362 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 13 00:09:16.139392 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 13 00:09:16.139410 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 13 00:09:16.139427 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 13 00:09:16.139450 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 13 00:09:16.139467 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 13 00:09:16.139484 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 13 
00:09:16.139503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 13 00:09:16.139520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 13 00:09:16.139538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 13 00:09:16.139556 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Sep 13 00:09:16.139574 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Sep 13 00:09:16.139591 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Sep 13 00:09:16.139614 kernel: Zone ranges: Sep 13 00:09:16.139632 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:09:16.139649 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 13 00:09:16.139666 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 13 00:09:16.139684 kernel: Movable zone start for each node Sep 13 00:09:16.139702 kernel: Early memory node ranges Sep 13 00:09:16.139719 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 13 00:09:16.139736 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 13 00:09:16.139754 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Sep 13 00:09:16.139776 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 13 00:09:16.139793 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 13 00:09:16.139811 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 13 00:09:16.139829 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:09:16.139846 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 13 00:09:16.139879 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 13 00:09:16.139905 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 13 00:09:16.139921 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 13 00:09:16.139939 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 13 00:09:16.139962 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:09:16.139980 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:09:16.139998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:09:16.140016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:09:16.140031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:09:16.140050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:09:16.140068 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:09:16.140086 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 13 00:09:16.140104 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 13 00:09:16.140126 kernel: Booting paravirtualized kernel on KVM Sep 13 00:09:16.140144 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:09:16.140163 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 13 00:09:16.140181 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576 Sep 13 00:09:16.140199 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152 Sep 13 00:09:16.140217 kernel: pcpu-alloc: [0] 0 1 Sep 13 00:09:16.140233 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:09:16.140250 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:09:16.140270 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:09:16.140293 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:09:16.140311 kernel: random: crng init done Sep 13 00:09:16.140329 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 13 00:09:16.140347 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:09:16.140365 kernel: Fallback order for Node 0: 0 Sep 13 00:09:16.140393 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Sep 13 00:09:16.140411 kernel: Policy zone: Normal Sep 13 00:09:16.140429 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:09:16.140451 kernel: software IO TLB: area num 2. Sep 13 00:09:16.140470 kernel: Memory: 7513400K/7860584K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 346924K reserved, 0K cma-reserved) Sep 13 00:09:16.140487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 13 00:09:16.140506 kernel: Kernel/User page tables isolation: enabled Sep 13 00:09:16.140524 kernel: ftrace: allocating 37974 entries in 149 pages Sep 13 00:09:16.140542 kernel: ftrace: allocated 149 pages with 4 groups Sep 13 00:09:16.140560 kernel: Dynamic Preempt: voluntary Sep 13 00:09:16.140579 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:09:16.140599 kernel: rcu: RCU event tracing is enabled. Sep 13 00:09:16.140636 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 13 00:09:16.140656 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:09:16.140676 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:09:16.140699 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:09:16.140719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 00:09:16.140738 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 13 00:09:16.140756 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 13 00:09:16.140775 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:09:16.140795 kernel: Console: colour dummy device 80x25 Sep 13 00:09:16.140818 kernel: printk: console [ttyS0] enabled Sep 13 00:09:16.140838 kernel: ACPI: Core revision 20230628 Sep 13 00:09:16.140895 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:09:16.140914 kernel: x2apic enabled Sep 13 00:09:16.140933 kernel: APIC: Switched APIC routing to: physical x2apic Sep 13 00:09:16.140952 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 13 00:09:16.140972 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 13 00:09:16.140991 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Sep 13 00:09:16.141016 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 13 00:09:16.141035 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 13 00:09:16.141055 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:09:16.141074 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 13 00:09:16.141094 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 13 00:09:16.141112 kernel: Spectre V2 : Mitigation: IBRS Sep 13 00:09:16.141131 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:09:16.141149 kernel: RETBleed: Mitigation: IBRS Sep 13 00:09:16.141168 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:09:16.141193 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Sep 13 00:09:16.141211 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 13 00:09:16.141230 kernel: MDS: Mitigation: Clear CPU buffers Sep 13 00:09:16.141249 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 13 00:09:16.141267 kernel: active return thunk: its_return_thunk Sep 13 00:09:16.141287 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 13 00:09:16.141307 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:09:16.141326 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:09:16.141346 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:09:16.141370 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:09:16.141397 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 13 00:09:16.141417 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:09:16.141436 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:09:16.141456 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:09:16.141475 kernel: landlock: Up and running. Sep 13 00:09:16.141495 kernel: SELinux: Initializing. Sep 13 00:09:16.141515 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:09:16.141535 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:09:16.141559 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 13 00:09:16.141578 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:09:16.141598 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:09:16.141617 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:09:16.141636 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 13 00:09:16.141656 kernel: signal: max sigframe size: 1776 Sep 13 00:09:16.141675 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:09:16.141695 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:09:16.141714 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 13 00:09:16.141737 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:09:16.141757 kernel: smpboot: x86: Booting SMP configuration: Sep 13 00:09:16.141776 kernel: .... 
node #0, CPUs: #1 Sep 13 00:09:16.141797 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 13 00:09:16.141816 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 13 00:09:16.141835 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:09:16.141872 kernel: smpboot: Max logical packages: 1 Sep 13 00:09:16.141898 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 13 00:09:16.141919 kernel: devtmpfs: initialized Sep 13 00:09:16.141946 kernel: x86/mm: Memory block size: 128MB Sep 13 00:09:16.141966 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 13 00:09:16.141986 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:09:16.142004 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 13 00:09:16.142020 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:09:16.142039 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:09:16.142058 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:09:16.142077 kernel: audit: type=2000 audit(1757722154.205:1): state=initialized audit_enabled=0 res=1 Sep 13 00:09:16.142100 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:09:16.142120 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:09:16.142138 kernel: cpuidle: using governor menu Sep 13 00:09:16.142157 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:09:16.142176 kernel: dca service started, version 1.12.1 Sep 13 00:09:16.142195 kernel: PCI: Using configuration type 1 for base access Sep 13 00:09:16.142214 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 13 00:09:16.142233 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:09:16.142252 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:09:16.142276 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:09:16.142295 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:09:16.142314 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:09:16.142332 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:09:16.142350 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:09:16.142370 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 13 00:09:16.142398 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 13 00:09:16.142417 kernel: ACPI: Interpreter enabled Sep 13 00:09:16.142437 kernel: ACPI: PM: (supports S0 S3 S5) Sep 13 00:09:16.142460 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:09:16.142478 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:09:16.142497 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 13 00:09:16.142515 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 13 00:09:16.142534 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:09:16.144322 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:09:16.144614 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 13 00:09:16.144816 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 13 00:09:16.144842 kernel: PCI host bridge to bus 0000:00 Sep 13 00:09:16.145062 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:09:16.145243 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:09:16.145422 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:09:16.145592 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 13 00:09:16.145763 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:09:16.146183 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 13 00:09:16.146441 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Sep 13 00:09:16.146654 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 13 00:09:16.146907 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 13 00:09:16.147138 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Sep 13 00:09:16.147365 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Sep 13 00:09:16.147569 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Sep 13 00:09:16.147772 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 13 00:09:16.148054 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Sep 13 00:09:16.148250 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Sep 13 00:09:16.148460 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:09:16.148652 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 13 00:09:16.148845 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Sep 13 00:09:16.148965 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:09:16.148983 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:09:16.149001 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 
00:09:16.149018 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:09:16.149035 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 13 00:09:16.149053 kernel: iommu: Default domain type: Translated Sep 13 00:09:16.149070 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:09:16.149088 kernel: efivars: Registered efivars operations Sep 13 00:09:16.149106 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:09:16.149128 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:09:16.149146 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 13 00:09:16.149163 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 13 00:09:16.149181 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 13 00:09:16.149198 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 13 00:09:16.149216 kernel: vgaarb: loaded Sep 13 00:09:16.149232 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:09:16.149249 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:09:16.149267 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:09:16.149290 kernel: pnp: PnP ACPI init Sep 13 00:09:16.149308 kernel: pnp: PnP ACPI: found 7 devices Sep 13 00:09:16.149327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:09:16.149345 kernel: NET: Registered PF_INET protocol family Sep 13 00:09:16.149364 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 13 00:09:16.149393 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 13 00:09:16.149412 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:09:16.149431 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:09:16.149449 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 13 00:09:16.149472 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 13 00:09:16.149491 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 13 00:09:16.149510 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 13 00:09:16.149529 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:09:16.149547 kernel: NET: Registered PF_XDP protocol family Sep 13 00:09:16.149748 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:09:16.151141 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:09:16.151357 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:09:16.151557 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 13 00:09:16.151763 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 13 00:09:16.151791 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:09:16.151812 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 13 00:09:16.151829 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 13 00:09:16.151846 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 00:09:16.151891 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 13 00:09:16.151909 kernel: clocksource: Switched to clocksource tsc Sep 13 00:09:16.151933 kernel: Initialise system trusted keyrings Sep 13 00:09:16.151950 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 
Sep 13 00:09:16.151967 kernel: Key type asymmetric registered Sep 13 00:09:16.151984 kernel: Asymmetric key parser 'x509' registered Sep 13 00:09:16.152001 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 00:09:16.152018 kernel: io scheduler mq-deadline registered Sep 13 00:09:16.152035 kernel: io scheduler kyber registered Sep 13 00:09:16.152053 kernel: io scheduler bfq registered Sep 13 00:09:16.152070 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:09:16.152093 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 13 00:09:16.152304 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Sep 13 00:09:16.152331 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 13 00:09:16.152535 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 13 00:09:16.152561 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 13 00:09:16.152755 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 13 00:09:16.152782 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:09:16.152801 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:09:16.152821 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 13 00:09:16.152846 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 13 00:09:16.152900 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 13 00:09:16.153097 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 13 00:09:16.153125 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:09:16.153145 kernel: i8042: Warning: Keylock active Sep 13 00:09:16.153164 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:09:16.153184 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:09:16.153391 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 13 00:09:16.153581 kernel: rtc_cmos 00:00: registered as rtc0 Sep 13 00:09:16.153757 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:09:15 UTC (1757722155) Sep 13 00:09:16.156011 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 13 00:09:16.156049 kernel: intel_pstate: CPU model not supported Sep 13 00:09:16.156071 kernel: pstore: Using crash dump compression: deflate Sep 13 00:09:16.156091 kernel: pstore: Registered efi_pstore as persistent store backend Sep 13 00:09:16.156112 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:09:16.156131 kernel: Segment Routing with IPv6 Sep 13 00:09:16.156158 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:09:16.156178 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:09:16.156196 kernel: Key type dns_resolver registered Sep 13 00:09:16.156216 kernel: IPI shorthand broadcast: enabled Sep 13 00:09:16.156236 kernel: sched_clock: Marking stable (913005190, 161164027)->(1146557187, -72387970) Sep 13 00:09:16.156256 kernel: registered taskstats version 1 Sep 13 00:09:16.156276 kernel: Loading compiled-in X.509 certificates Sep 13 00:09:16.156295 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:09:16.156315 kernel: Key type .fscrypt registered Sep 13 00:09:16.156339 kernel: Key type fscrypt-provisioning registered Sep 13 00:09:16.156359 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:09:16.156387 kernel: ima: No architecture policies found Sep 13 00:09:16.156406 kernel: clk: Disabling unused clocks Sep 13 
00:09:16.156426 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:09:16.156446 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:09:16.156466 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:09:16.156485 kernel: Run /init as init process Sep 13 00:09:16.156509 kernel: with arguments: Sep 13 00:09:16.156529 kernel: /init Sep 13 00:09:16.156549 kernel: with environment: Sep 13 00:09:16.156568 kernel: HOME=/ Sep 13 00:09:16.156588 kernel: TERM=linux Sep 13 00:09:16.156608 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:09:16.156627 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Sep 13 00:09:16.156650 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:09:16.156679 systemd[1]: Detected virtualization google. Sep 13 00:09:16.156700 systemd[1]: Detected architecture x86-64. Sep 13 00:09:16.156721 systemd[1]: Running in initrd. Sep 13 00:09:16.156741 systemd[1]: No hostname configured, using default hostname. Sep 13 00:09:16.156759 systemd[1]: Hostname set to . Sep 13 00:09:16.156780 systemd[1]: Initializing machine ID from random generator. Sep 13 00:09:16.156801 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:09:16.156822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:16.156847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:16.156917 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:09:16.156938 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:09:16.156957 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:09:16.156976 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:09:16.156998 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:09:16.157039 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:09:16.157060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:16.157081 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:16.157122 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:09:16.157149 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:09:16.157170 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:09:16.157190 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:09:16.157216 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:09:16.157237 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:09:16.157258 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:09:16.157279 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Sep 13 00:09:16.157300 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:16.157320 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:16.157341 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:16.157361 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:09:16.157395 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:09:16.157417 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:09:16.157438 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:09:16.157459 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:09:16.157480 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:09:16.157500 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:09:16.157520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:16.157541 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:09:16.157603 systemd-journald[183]: Collecting audit messages is disabled. Sep 13 00:09:16.157654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:16.157675 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:09:16.157703 systemd-journald[183]: Journal started Sep 13 00:09:16.157745 systemd-journald[183]: Runtime Journal (/run/log/journal/0bd6a0a300b14528ae6639b225f39c81) is 8.0M, max 148.7M, 140.7M free. Sep 13 00:09:16.160952 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:09:16.127838 systemd-modules-load[184]: Inserted module 'overlay' Sep 13 00:09:16.166871 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:09:16.179188 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:09:16.189725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:16.200021 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:09:16.200064 kernel: Bridge firewalling registered Sep 13 00:09:16.196066 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:09:16.199216 systemd-modules-load[184]: Inserted module 'br_netfilter' Sep 13 00:09:16.209633 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:16.217650 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:09:16.232139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:09:16.235783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:09:16.255897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:09:16.273167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:16.277356 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:16.289258 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:09:16.293403 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 00:09:16.302468 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:09:16.332599 dracut-cmdline[218]: dracut-dracut-053 Sep 13 00:09:16.337800 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:09:16.337597 systemd-resolved[213]: Positive Trust Anchors: Sep 13 00:09:16.337616 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:09:16.337695 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:09:16.345672 systemd-resolved[213]: Defaulting to hostname 'linux'. Sep 13 00:09:16.347613 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:09:16.358370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:16.444909 kernel: SCSI subsystem initialized Sep 13 00:09:16.457905 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:09:16.470896 kernel: iscsi: registered transport (tcp) Sep 13 00:09:16.497674 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:09:16.497770 kernel: QLogic iSCSI HBA Driver Sep 13 00:09:16.553532 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:09:16.560092 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:09:16.607933 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:09:16.608026 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:09:16.608054 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:09:16.655922 kernel: raid6: avx2x4 gen() 17874 MB/s Sep 13 00:09:16.672899 kernel: raid6: avx2x2 gen() 18021 MB/s Sep 13 00:09:16.691025 kernel: raid6: avx2x1 gen() 13823 MB/s Sep 13 00:09:16.691117 kernel: raid6: using algorithm avx2x2 gen() 18021 MB/s Sep 13 00:09:16.708930 kernel: raid6: .... xor() 17456 MB/s, rmw enabled Sep 13 00:09:16.709035 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:09:16.733904 kernel: xor: automatically using best checksumming function avx Sep 13 00:09:16.907894 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:09:16.922171 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:09:16.928121 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:16.974036 systemd-udevd[400]: Using default interface naming scheme 'v255'. Sep 13 00:09:16.981174 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 13 00:09:17.015102 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:09:17.035113 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Sep 13 00:09:17.072012 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:09:17.096141 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:09:17.207594 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:17.226113 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:09:17.279744 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:09:17.303344 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:09:17.361035 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:09:17.361088 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:09:17.316012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:17.393379 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 13 00:09:17.393457 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:09:17.393484 kernel: AES CTR mode by8 optimization enabled Sep 13 00:09:17.333242 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:09:17.357211 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:09:17.436935 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:09:17.437225 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:17.543123 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 13 00:09:17.543520 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 13 00:09:17.543757 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 13 00:09:17.544022 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 13 00:09:17.544248 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:09:17.544484 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:09:17.544520 kernel: GPT:17805311 != 25165823 Sep 13 00:09:17.544543 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:09:17.460985 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:09:17.569023 kernel: GPT:17805311 != 25165823 Sep 13 00:09:17.569063 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:09:17.569093 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:17.569128 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 13 00:09:17.483167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:17.483568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:17.507055 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:17.616451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:17.640913 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (446) Sep 13 00:09:17.653896 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (459) Sep 13 00:09:17.660812 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
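The GPT complaints above are the usual signature of a disk image grown on first boot: the primary header at LBA 1 still records the backup header's location from the smaller build-time image (17805311), while the device now ends at LBA 25165823. A minimal sketch of that comparison, assuming /dev/sda with 512-byte logical sectors and root privileges:

#!/usr/bin/env python3
"""Compare the backup-header LBA recorded in the primary GPT header
with the device's real last LBA (the check behind 'GPT:x != y')."""
import struct

DEV, SECTOR = "/dev/sda", 512  # assumed; matches the sda sizing above

with open(DEV, "rb") as disk:
    disk.seek(SECTOR)                      # primary GPT header lives at LBA 1
    header = disk.read(92)
    assert header[:8] == b"EFI PART"       # GPT signature
    backup_lba = struct.unpack_from("<Q", header, 32)[0]
    disk.seek(0, 2)
    last_lba = disk.tell() // SECTOR - 1   # actual end of the device

if backup_lba != last_lba:
    print(f"GPT:{backup_lba} != {last_lba} -- backup header not at disk end")

As the kernel suggests, a partitioning tool rewrites the headers; in this boot the disk-uuid.service step that follows performs that update.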
Sep 13 00:09:17.693513 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:17.708465 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Sep 13 00:09:17.725313 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 13 00:09:17.746826 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 13 00:09:17.778309 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Sep 13 00:09:17.778593 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 13 00:09:17.806182 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:09:17.855264 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:09:17.869513 disk-uuid[540]: Primary Header is updated. Sep 13 00:09:17.869513 disk-uuid[540]: Secondary Entries is updated. Sep 13 00:09:17.869513 disk-uuid[540]: Secondary Header is updated. Sep 13 00:09:17.892618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:17.928105 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:17.928146 kernel: GPT:disk_guids don't match. Sep 13 00:09:17.928173 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:09:17.928199 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:17.947886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:18.940040 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:18.940139 disk-uuid[541]: The operation has completed successfully. Sep 13 00:09:19.021955 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:09:19.022169 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:09:19.047069 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:09:19.082263 sh[566]: Success Sep 13 00:09:19.106883 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:09:19.193522 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:09:19.201112 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:09:19.224535 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:09:19.278720 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:09:19.278817 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:19.278844 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:09:19.294984 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:09:19.295059 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:09:19.338916 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:09:19.347200 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:09:19.348194 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:09:19.353258 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 13 00:09:19.430015 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:19.430051 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:19.430068 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:09:19.375077 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:09:19.452784 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:09:19.452838 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:09:19.459061 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:09:19.476053 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:19.490514 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:09:19.508330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:09:19.603250 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:09:19.610128 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:09:19.680447 systemd-networkd[749]: lo: Link UP Sep 13 00:09:19.680460 systemd-networkd[749]: lo: Gained carrier Sep 13 00:09:19.686066 systemd-networkd[749]: Enumeration completed Sep 13 00:09:19.686712 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:19.686719 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:09:19.686925 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:09:19.702392 systemd-networkd[749]: eth0: Link UP Sep 13 00:09:19.746497 ignition[667]: Ignition 2.19.0 Sep 13 00:09:19.702400 systemd-networkd[749]: eth0: Gained carrier Sep 13 00:09:19.746506 ignition[667]: Stage: fetch-offline Sep 13 00:09:19.702418 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:19.746552 ignition[667]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:19.720037 systemd-networkd[749]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c.c.flatcar-212911.internal' to 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:09:19.746563 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:09:19.720061 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.50/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 13 00:09:19.746684 ignition[667]: parsed url from cmdline: "" Sep 13 00:09:19.743188 systemd[1]: Reached target network.target - Network. Sep 13 00:09:19.746691 ignition[667]: no config URL provided Sep 13 00:09:19.752407 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:09:19.746699 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:09:19.780125 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
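The "Overlong DHCP hostname" line shows networkd refusing a full FQDN that exceeds Linux's 64-byte hostname limit and falling back to the machine's own label. A minimal sketch of that shortening, with the fallback policy inferred from the message rather than taken from systemd's source:

#!/usr/bin/env python3
"""Shorten a DHCP-provided FQDN to fit HOST_NAME_MAX."""
HOST_NAME_MAX = 64  # Linux limit for sethostname()

def shorten(fqdn: str) -> str:
    if len(fqdn) <= HOST_NAME_MAX:
        return fqdn
    return fqdn.split(".", 1)[0]  # keep only the first DNS label

fqdn = ("ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
        ".c.flatcar-212911.internal")
print(shorten(fqdn))  # matches the shortened name in the log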
Sep 13 00:09:19.746710 ignition[667]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:09:19.832378 unknown[757]: fetched base config from "system" Sep 13 00:09:19.746718 ignition[667]: failed to fetch config: resource requires networking Sep 13 00:09:19.832393 unknown[757]: fetched base config from "system" Sep 13 00:09:19.747215 ignition[667]: Ignition finished successfully Sep 13 00:09:19.832407 unknown[757]: fetched user config from "gcp" Sep 13 00:09:19.822209 ignition[757]: Ignition 2.19.0 Sep 13 00:09:19.835081 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:09:19.822221 ignition[757]: Stage: fetch Sep 13 00:09:19.854235 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:09:19.822462 ignition[757]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:19.927716 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:09:19.822475 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:09:19.946090 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:09:19.822607 ignition[757]: parsed url from cmdline: "" Sep 13 00:09:19.975654 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:09:19.822615 ignition[757]: no config URL provided Sep 13 00:09:20.003710 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:09:19.822624 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:09:20.024222 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:09:19.822636 ignition[757]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:09:20.047215 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:09:19.822663 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 13 00:09:20.069215 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:09:19.826615 ignition[757]: GET result: OK Sep 13 00:09:20.096174 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:09:19.826721 ignition[757]: parsing config with SHA512: e6651b9a810d09317e5ddeea680dcd48caa8ac3e30fc1fd52feaadf136cb76cca313ce9d7c0adfa32d062f603fed01c4ffcf64aebe8472d03acfd403cab79a76 Sep 13 00:09:20.111086 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
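The fetch stage above pulls the instance's user-data from the GCE metadata server and logs a SHA512 of the config it is about to parse. A minimal sketch of the same round trip, assuming the documented Metadata-Flavor header and that the digest is taken over the raw response body:

#!/usr/bin/env python3
"""Fetch GCE user-data and print its SHA512, mirroring the log line."""
import hashlib
import urllib.request

URL = ("http://169.254.169.254/computeMetadata/v1/"
       "instance/attributes/user-data")

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req) as resp:
    body = resp.read()

print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())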
Sep 13 00:09:19.833059 ignition[757]: fetch: fetch complete Sep 13 00:09:19.833069 ignition[757]: fetch: fetch passed Sep 13 00:09:19.833134 ignition[757]: Ignition finished successfully Sep 13 00:09:19.925149 ignition[763]: Ignition 2.19.0 Sep 13 00:09:19.925158 ignition[763]: Stage: kargs Sep 13 00:09:19.925398 ignition[763]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:19.925411 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:09:19.926457 ignition[763]: kargs: kargs passed Sep 13 00:09:19.926540 ignition[763]: Ignition finished successfully Sep 13 00:09:19.970938 ignition[768]: Ignition 2.19.0 Sep 13 00:09:19.970948 ignition[768]: Stage: disks Sep 13 00:09:19.971187 ignition[768]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:19.971200 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:09:19.972809 ignition[768]: disks: disks passed Sep 13 00:09:19.972910 ignition[768]: Ignition finished successfully Sep 13 00:09:20.172923 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 13 00:09:20.320026 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:09:20.326081 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:09:20.465926 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 00:09:20.466649 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:09:20.467642 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:09:20.490059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:09:20.523062 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:09:20.564176 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (785) Sep 13 00:09:20.564225 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:20.564251 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:20.564275 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:09:20.532647 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 13 00:09:20.595933 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:09:20.595999 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:09:20.532719 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:09:20.532752 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:09:20.603078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:09:20.626995 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:09:20.656159 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:09:20.801521 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:09:20.813014 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:09:20.823793 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:09:20.834993 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:09:20.984674 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
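The systemd-fsck summary above ("clean, 14/1628000 files, 120691/1617920 blocks") packs inode and block usage into one line; parsed out, the root filesystem is about 7.5% full. A small sketch of extracting those ratios from that exact line shape:

#!/usr/bin/env python3
"""Parse an e2fsck 'clean' summary into usage percentages."""
import re

line = "ROOT: clean, 14/1628000 files, 120691/1617920 blocks"
m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
inodes_used, inodes_total, blocks_used, blocks_total = map(int, m.groups())

print(f"inodes: {inodes_used / inodes_total:.4%} used, "
      f"blocks: {blocks_used / blocks_total:.2%} used")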
Sep 13 00:09:20.990063 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:09:21.010136 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:09:21.047880 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:21.049594 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:09:21.073934 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:09:21.090031 ignition[898]: INFO : Ignition 2.19.0 Sep 13 00:09:21.090031 ignition[898]: INFO : Stage: mount Sep 13 00:09:21.090031 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:21.090031 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:09:21.090031 ignition[898]: INFO : mount: mount passed Sep 13 00:09:21.090031 ignition[898]: INFO : Ignition finished successfully Sep 13 00:09:21.092089 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:09:21.115110 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:09:21.389145 systemd-networkd[749]: eth0: Gained IPv6LL Sep 13 00:09:21.472195 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:09:21.520910 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (909) Sep 13 00:09:21.538627 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:21.538723 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:21.538749 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:09:21.561430 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:09:21.561528 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:09:21.565067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 00:09:21.604926 ignition[926]: INFO : Ignition 2.19.0 Sep 13 00:09:21.604926 ignition[926]: INFO : Stage: files Sep 13 00:09:21.619009 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:21.619009 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:09:21.619009 ignition[926]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:09:21.619009 ignition[926]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:09:21.619009 ignition[926]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:09:21.619009 ignition[926]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:09:21.619009 ignition[926]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:09:21.619009 ignition[926]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:09:21.619009 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:09:21.619009 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 00:09:21.614491 unknown[926]: wrote ssh authorized keys file for user: core Sep 13 00:09:21.757878 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:09:22.261883 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:22.279039 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 00:09:22.678498 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:09:23.454955 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:23.454955 ignition[926]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:23.494074 ignition[926]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:23.494074 ignition[926]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:23.494074 ignition[926]: INFO : files: files passed Sep 13 00:09:23.494074 ignition[926]: INFO : Ignition finished successfully Sep 13 00:09:23.462542 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:09:23.481150 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:09:23.527160 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:09:23.538653 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:09:23.712081 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:23.712081 initrd-setup-root-after-ignition[954]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:23.538785 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:09:23.748105 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:23.595530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:23.622402 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:09:23.654136 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:09:23.753277 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:09:23.753439 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:09:23.763441 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:09:23.794233 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:09:23.814293 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:09:23.821264 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:09:23.912626 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:23.938144 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:09:23.984446 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:23.996403 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:24.006477 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:09:24.042220 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:09:24.042625 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:24.071308 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:09:24.071679 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:09:24.088536 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:09:24.104443 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:09:24.133353 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:09:24.143428 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:09:24.161432 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:09:24.200256 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:09:24.200686 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:09:24.217433 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:09:24.234423 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:09:24.234633 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:09:24.275317 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:24.285280 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:24.306282 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:09:24.306451 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:24.328202 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:09:24.328445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:09:24.359323 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:09:24.359563 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:24.379391 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:09:24.379630 systemd[1]: Stopped ignition-files.service - Ignition (files). 
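Everything the files stage logged before this teardown (the core user's ssh key, the helm tarball fetch, the kubernetes sysext symlink, enabling prepare-helm.service) was driven by the Ignition config fetched from user-data earlier. Purely as an illustration, not the instance's actual config, a Python-built skeleton covering a subset of those operations against the public Ignition v3 schema (spec version and placeholder values are assumptions):

#!/usr/bin/env python3
"""Emit an illustrative Ignition config for a few of the logged ops."""
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [
        {"name": "core",
         "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (example key)"]},
    ]},
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": {"source":
                "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
        }],
        "links": [{
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
        }],
    },
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True,
         "contents": "[Unit]\nDescription=Unpack helm\n(placeholder unit body)"},
    ]},
}

print(json.dumps(config, indent=2))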
Sep 13 00:09:24.449136 ignition[979]: INFO : Ignition 2.19.0 Sep 13 00:09:24.449136 ignition[979]: INFO : Stage: umount Sep 13 00:09:24.449136 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:24.449136 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:09:24.449136 ignition[979]: INFO : umount: umount passed Sep 13 00:09:24.449136 ignition[979]: INFO : Ignition finished successfully Sep 13 00:09:24.406203 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:09:24.446207 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:09:24.457090 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:09:24.457530 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:24.528347 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:09:24.528535 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:09:24.561459 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:09:24.562473 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:09:24.562596 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:09:24.568061 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:09:24.568193 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:09:24.586628 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:09:24.586763 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:09:24.603567 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:09:24.603639 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:09:24.629247 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:09:24.629330 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:09:24.655248 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:09:24.655336 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:09:24.673339 systemd[1]: Stopped target network.target - Network. Sep 13 00:09:24.689146 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:09:24.689266 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:09:24.697330 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:09:24.717286 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:09:24.720991 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:24.733253 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:09:24.759145 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:09:24.767312 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:09:24.767376 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:09:24.800243 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:09:24.800319 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:09:24.808268 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:09:24.808348 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:09:24.840240 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:09:24.840322 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Sep 13 00:09:24.860229 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:09:24.860312 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:09:24.868556 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:09:24.873973 systemd-networkd[749]: eth0: DHCPv6 lease lost Sep 13 00:09:24.897285 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:09:24.920653 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:09:24.920798 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:09:24.940092 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:09:24.940387 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:09:24.958213 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:09:24.958272 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:24.972001 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:09:25.025036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:09:25.025260 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:09:25.046255 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:09:25.046336 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:25.066278 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:09:25.066365 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:25.076322 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:09:25.076401 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:09:25.115404 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:25.134783 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:09:25.135078 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:09:25.160662 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:09:25.160788 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:09:25.181122 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:09:25.181225 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:25.199248 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:09:25.199319 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:25.209311 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:09:25.209394 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:09:25.261040 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:09:25.261258 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:09:25.289335 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:09:25.567037 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Sep 13 00:09:25.289434 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:25.334089 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Sep 13 00:09:25.358045 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:09:25.358186 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:25.380137 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:09:25.380231 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:09:25.399137 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:09:25.399238 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:25.421159 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:25.421258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:25.441670 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:09:25.441838 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:09:25.460814 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:09:25.486140 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:09:25.521505 systemd[1]: Switching root. Sep 13 00:09:25.724085 systemd-journald[183]: Journal stopped Sep 13 00:09:28.404488 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:09:28.404552 kernel: SELinux: policy capability open_perms=1 Sep 13 00:09:28.404575 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:09:28.404592 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:09:28.404610 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:09:28.404627 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:09:28.404648 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:09:28.404672 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:09:28.404693 kernel: audit: type=1403 audit(1757722166.205:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:09:28.404723 systemd[1]: Successfully loaded SELinux policy in 92.097ms. Sep 13 00:09:28.404746 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.340ms. Sep 13 00:09:28.404775 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:09:28.404797 systemd[1]: Detected virtualization google. Sep 13 00:09:28.404818 systemd[1]: Detected architecture x86-64. Sep 13 00:09:28.404848 systemd[1]: Detected first boot. Sep 13 00:09:28.404888 systemd[1]: Initializing machine ID from random generator. Sep 13 00:09:28.404910 zram_generator::config[1020]: No configuration found. Sep 13 00:09:28.404933 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:09:28.404955 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:09:28.404979 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:09:28.405000 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:09:28.405024 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:09:28.405125 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Sep 13 00:09:28.405147 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:09:28.405170 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:09:28.405191 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:09:28.405218 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:09:28.405238 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:09:28.405257 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:09:28.405279 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:28.405304 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:28.405328 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:09:28.405350 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:09:28.405372 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:09:28.405405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:09:28.405429 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:09:28.405453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:28.405474 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:09:28.405495 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:09:28.405523 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:09:28.405551 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:09:28.405575 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:28.405599 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:09:28.405627 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:09:28.405650 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:09:28.405674 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:09:28.405693 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:09:28.405713 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:28.405733 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:28.405753 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:28.405785 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:09:28.405801 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:09:28.405815 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:09:28.405829 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:09:28.405844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:28.405889 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:09:28.405914 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Sep 13 00:09:28.405928 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:09:28.405943 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:09:28.405957 systemd[1]: Reached target machines.target - Containers. Sep 13 00:09:28.405971 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:09:28.405985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:09:28.405999 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:09:28.406017 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:09:28.406031 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:09:28.406045 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:09:28.406058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:09:28.406072 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:09:28.406086 kernel: fuse: init (API version 7.39) Sep 13 00:09:28.406098 kernel: ACPI: bus type drm_connector registered Sep 13 00:09:28.406111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:09:28.406128 kernel: loop: module loaded Sep 13 00:09:28.406143 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:09:28.406157 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:09:28.406171 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:09:28.406185 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:09:28.406198 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:09:28.406212 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:09:28.406226 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:09:28.406239 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:09:28.406291 systemd-journald[1107]: Collecting audit messages is disabled. Sep 13 00:09:28.406322 systemd-journald[1107]: Journal started Sep 13 00:09:28.406359 systemd-journald[1107]: Runtime Journal (/run/log/journal/85de2dbfa1664245a1903402ff81cfb1) is 8.0M, max 148.7M, 140.7M free. Sep 13 00:09:27.147285 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:09:27.174011 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 13 00:09:27.174610 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:09:28.437019 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:09:28.468894 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:09:28.491795 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:09:28.491954 systemd[1]: Stopped verity-setup.service. Sep 13 00:09:28.517883 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 13 00:09:28.527920 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:09:28.538520 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:09:28.549323 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:09:28.559315 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:09:28.569320 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:09:28.579311 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:09:28.589317 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:09:28.599424 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:09:28.611475 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:28.623437 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:09:28.623730 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:09:28.635543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:09:28.635819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:09:28.647591 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:09:28.647885 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:09:28.658510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:09:28.658804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:09:28.670458 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:09:28.670713 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:09:28.681452 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:09:28.681706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:09:28.692461 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:28.702480 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:09:28.714484 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:09:28.726452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:28.751544 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:09:28.767032 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:09:28.792972 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:09:28.803095 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:09:28.803354 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:09:28.815053 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:09:28.838203 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:09:28.855187 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:09:28.865280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:09:28.874529 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 13 00:09:28.890974 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:09:28.903098 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:09:28.911194 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:09:28.921073 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:09:28.935685 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:09:28.942999 systemd-journald[1107]: Time spent on flushing to /var/log/journal/85de2dbfa1664245a1903402ff81cfb1 is 93.983ms for 933 entries. Sep 13 00:09:28.942999 systemd-journald[1107]: System Journal (/var/log/journal/85de2dbfa1664245a1903402ff81cfb1) is 8.0M, max 584.8M, 576.8M free. Sep 13 00:09:29.084179 systemd-journald[1107]: Received client request to flush runtime journal. Sep 13 00:09:29.084261 kernel: loop0: detected capacity change from 0 to 142488 Sep 13 00:09:28.959109 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:09:28.978323 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:09:28.998187 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:09:29.012552 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:09:29.024254 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:09:29.037689 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:09:29.055512 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:09:29.075512 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:09:29.098187 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:09:29.109689 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:09:29.121587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:29.148474 systemd-tmpfiles[1139]: ACLs are not supported, ignoring. Sep 13 00:09:29.148509 systemd-tmpfiles[1139]: ACLs are not supported, ignoring. Sep 13 00:09:29.154064 udevadm[1140]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:09:29.168962 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:09:29.171124 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:09:29.184822 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:09:29.187732 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:09:29.203287 kernel: loop1: detected capacity change from 0 to 224512 Sep 13 00:09:29.214197 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:09:29.296997 kernel: loop2: detected capacity change from 0 to 140768 Sep 13 00:09:29.316606 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:09:29.338636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 13 00:09:29.427945 kernel: loop3: detected capacity change from 0 to 54824 Sep 13 00:09:29.426495 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Sep 13 00:09:29.426531 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Sep 13 00:09:29.438538 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:29.520109 kernel: loop4: detected capacity change from 0 to 142488 Sep 13 00:09:29.579968 kernel: loop5: detected capacity change from 0 to 224512 Sep 13 00:09:29.627000 kernel: loop6: detected capacity change from 0 to 140768 Sep 13 00:09:29.691883 kernel: loop7: detected capacity change from 0 to 54824 Sep 13 00:09:29.716140 (sd-merge)[1166]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Sep 13 00:09:29.718910 (sd-merge)[1166]: Merged extensions into '/usr'. Sep 13 00:09:29.731235 systemd[1]: Reloading requested from client PID 1138 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:09:29.731447 systemd[1]: Reloading... Sep 13 00:09:29.915894 zram_generator::config[1190]: No configuration found. Sep 13 00:09:30.173367 ldconfig[1133]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:09:30.197691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:09:30.306259 systemd[1]: Reloading finished in 573 ms. Sep 13 00:09:30.338393 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:09:30.349846 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:09:30.379244 systemd[1]: Starting ensure-sysext.service... Sep 13 00:09:30.394195 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:09:30.415001 systemd[1]: Reloading requested from client PID 1232 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:09:30.415037 systemd[1]: Reloading... Sep 13 00:09:30.472683 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:09:30.474287 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:09:30.477635 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:09:30.478412 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Sep 13 00:09:30.478554 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Sep 13 00:09:30.489967 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:09:30.490263 systemd-tmpfiles[1233]: Skipping /boot Sep 13 00:09:30.515399 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:09:30.515422 systemd-tmpfiles[1233]: Skipping /boot Sep 13 00:09:30.565906 zram_generator::config[1259]: No configuration found. Sep 13 00:09:30.707795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:09:30.773707 systemd[1]: Reloading finished in 357 ms. Sep 13 00:09:30.791463 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
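The (sd-merge) lines explain the burst of loop-device probes above: each extension image (containerd-flatcar, docker-flatcar, kubernetes, oem-gce) is attached as a loop device, then /usr is overlaid so the extensions' files take precedence over the base image. A conceptual sketch of the resulting mount, with illustrative paths; systemd-sysext performs this internally rather than via a mount(8) call like this one:

#!/usr/bin/env python3
"""Approximate the overlayfs merge systemd-sysext logs as 'Merged
extensions into /usr' (run as root; paths are assumptions)."""
import subprocess

extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-gce"]

# Leftmost lowerdir has highest precedence in overlayfs, so extensions
# are listed before the original /usr.
lowerdirs = [f"/run/extensions/{name}/usr" for name in extensions] + ["/usr"]

subprocess.run([
    "mount", "-t", "overlay", "overlay",
    "-o", "lowerdir=" + ":".join(lowerdirs),
    "/usr",
], check=True)

The reload that follows ("Reloading requested from client PID ... systemd-sysext") is systemd picking up the unit files that the merged extensions just added under /usr.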
Sep 13 00:09:30.809670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:09:30.833289 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:09:30.852987 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:09:30.877041 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:09:30.898359 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:09:30.916332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:30.936246 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:09:30.951499 augenrules[1322]: No rules Sep 13 00:09:30.954819 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:09:30.967133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:30.969312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:09:30.979225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:09:30.995386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:09:31.004424 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Sep 13 00:09:31.015745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:09:31.027199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:09:31.035934 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:09:31.046016 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:31.054427 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:09:31.067585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:09:31.068536 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:09:31.080544 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:09:31.093928 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:09:31.105715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:09:31.106831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:09:31.119464 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:09:31.120091 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:09:31.131633 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:09:31.187308 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:09:31.233719 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:31.235366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:09:31.244275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 13 00:09:31.264376 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:09:31.281265 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:09:31.295241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:09:31.317053 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 13 00:09:31.326185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:09:31.352027 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:09:31.362202 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:09:31.380264 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:09:31.390029 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:09:31.390261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:31.395211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:09:31.395966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:09:31.409029 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:09:31.409326 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:09:31.417032 systemd-resolved[1314]: Positive Trust Anchors: Sep 13 00:09:31.417087 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:09:31.417157 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:09:31.419845 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:09:31.421320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:09:31.434952 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:09:31.435776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:09:31.441316 systemd-resolved[1314]: Defaulting to hostname 'linux'. Sep 13 00:09:31.447966 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:09:31.458060 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:09:31.476960 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:09:31.495969 systemd[1]: Finished ensure-sysext.service. Sep 13 00:09:31.508973 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 00:09:31.513996 systemd[1]: Finished setup-oem.service - Setup OEM. 
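
The systemd-resolved block above seeds DNSSEC validation with the root KSK DS record as the positive trust anchor and a list of negative anchors for private and reserved zones. Local overrides can be dropped into /etc/dnssec-trust-anchors.d/ in the same DS syntax resolved logs at startup; a hedged sketch:

    # Supply the root trust anchor explicitly (the same record resolved printed above).
    sudo mkdir -p /etc/dnssec-trust-anchors.d
    sudo tee /etc/dnssec-trust-anchors.d/root.positive <<'EOF'
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
    EOF
    sudo systemctl restart systemd-resolved
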
Sep 13 00:09:31.527883 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1357) Sep 13 00:09:31.548886 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:09:31.567371 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:09:31.611758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:31.625926 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 13 00:09:31.640159 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Sep 13 00:09:31.651242 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:09:31.651531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:09:31.661913 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Sep 13 00:09:31.669927 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:09:31.699267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:31.751883 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:09:31.767958 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:09:31.781803 systemd-networkd[1370]: lo: Link UP Sep 13 00:09:31.782085 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 13 00:09:31.783336 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Sep 13 00:09:31.784587 systemd-networkd[1370]: lo: Gained carrier Sep 13 00:09:31.794260 systemd-networkd[1370]: Enumeration completed Sep 13 00:09:31.799933 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:09:31.800219 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:09:31.801682 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:31.801696 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:09:31.804008 systemd[1]: Reached target network.target - Network. Sep 13 00:09:31.805239 systemd-networkd[1370]: eth0: Link UP Sep 13 00:09:31.808911 systemd-networkd[1370]: eth0: Gained carrier Sep 13 00:09:31.808956 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:31.811167 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:09:31.821400 systemd-networkd[1370]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c.c.flatcar-212911.internal' to 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:09:31.821723 systemd-networkd[1370]: eth0: DHCPv4 address 10.128.0.50/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 13 00:09:31.848809 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:09:31.849627 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:09:31.857149 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
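
eth0 above is picked up by the catch-all /usr/lib/systemd/network/zz-default.network and configured via DHCPv4, which is also where the overlong-DHCP-hostname shortening comes from. A local unit with a lower lexical name takes precedence; a sketch (the file name and the UseHostname choice are illustrative, not the shipped config):

    sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=ipv4

    [DHCPv4]
    # Do not adopt the DHCP-supplied hostname (the one the log had to shorten).
    UseHostname=false
    EOF
    sudo networkctl reload
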
Sep 13 00:09:31.892951 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:09:31.904197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:31.929281 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:09:31.941443 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:31.952088 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:09:31.962214 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:09:31.974159 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:09:31.986447 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:09:31.997278 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:09:32.009077 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:09:32.020071 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:09:32.020141 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:09:32.029067 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:09:32.040881 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:09:32.052931 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:09:32.074954 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:09:32.090175 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:09:32.103089 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:09:32.113380 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:09:32.114776 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:09:32.123062 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:09:32.132150 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:09:32.132441 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:09:32.146125 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:09:32.162188 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:09:32.180735 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:09:32.200054 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:09:32.223842 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:09:32.234031 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:09:32.238456 jq[1422]: false Sep 13 00:09:32.244121 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:09:32.262228 systemd[1]: Started ntpd.service - Network Time Service. Sep 13 00:09:32.282032 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
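
docker.socket and sshd.socket above are socket units: systemd owns the listening socket and starts the daemon on the first connection. The "/var/run -> /run" warning during the earlier reloads would go away if the unit named /run directly; a sketch of such an override using standard [Socket] directives (not Flatcar's shipped file):

    sudo tee /etc/systemd/system/docker.socket <<'EOF'
    [Unit]
    Description=Docker Socket for the API

    [Socket]
    # /run, not the legacy /var/run symlink the journal warns about.
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker

    [Install]
    WantedBy=sockets.target
    EOF
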
Sep 13 00:09:32.283843 coreos-metadata[1420]: Sep 13 00:09:32.283 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Sep 13 00:09:32.287498 coreos-metadata[1420]: Sep 13 00:09:32.287 INFO Fetch successful Sep 13 00:09:32.287498 coreos-metadata[1420]: Sep 13 00:09:32.287 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Sep 13 00:09:32.293884 coreos-metadata[1420]: Sep 13 00:09:32.293 INFO Fetch successful Sep 13 00:09:32.293884 coreos-metadata[1420]: Sep 13 00:09:32.293 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Sep 13 00:09:32.294495 coreos-metadata[1420]: Sep 13 00:09:32.294 INFO Fetch successful Sep 13 00:09:32.294495 coreos-metadata[1420]: Sep 13 00:09:32.294 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Sep 13 00:09:32.305482 coreos-metadata[1420]: Sep 13 00:09:32.294 INFO Fetch successful
Sep 13 00:09:32.298136 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:09:32.316127 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:09:32.342160 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:09:32.354673 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 13 00:09:32.355553 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:09:32.363540 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:09:32.396719 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:09:32.329291 extend-filesystems[1425]: Found loop4 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found loop5 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found loop6 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found loop7 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found sda Sep 13 00:09:32.329291 extend-filesystems[1425]: Found sda1 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found sda2 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found sda3 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found usr Sep 13 00:09:32.329291 extend-filesystems[1425]: Found sda4 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found sda6 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found sda7 Sep 13 00:09:32.329291 extend-filesystems[1425]: Found sda9 Sep 13 00:09:32.329291 extend-filesystems[1425]: Checking size of /dev/sda9
Sep 13 00:09:32.402720 ntpd[1427]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting Sep 13 00:09:32.402755 ntpd[1427]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 13 00:09:32.402771 ntpd[1427]: ---------------------------------------------------- Sep 13 00:09:32.402786 ntpd[1427]: ntp-4 is maintained by Network Time Foundation, Sep 13 00:09:32.402800 ntpd[1427]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 13 00:09:32.402814 ntpd[1427]: corporation. Support and training for ntp-4 are Sep 13 00:09:32.402829 ntpd[1427]: available at https://www.nwtime.org/support Sep 13 00:09:32.402842 ntpd[1427]: ----------------------------------------------------
Sep 13 00:09:32.405712 ntpd[1427]: proto: precision = 0.110 usec (-23) Sep 13 00:09:32.408223 ntpd[1427]: basedate set to 2025-08-31 Sep 13 00:09:32.408249 ntpd[1427]: gps base set to 2025-08-31 (week 2382) Sep 13 00:09:32.419131 ntpd[1427]: Listen and drop on 0 v6wildcard [::]:123 Sep 13 00:09:32.419212 ntpd[1427]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 13 00:09:32.420811 ntpd[1427]: Listen normally on 2 lo 127.0.0.1:123 Sep 13 00:09:32.424588 ntpd[1427]: Listen normally on 3 eth0 10.128.0.50:123 Sep 13 00:09:32.424662 ntpd[1427]: Listen normally on 4 lo [::1]:123 Sep 13 00:09:32.424742 ntpd[1427]: bind(21) AF_INET6 fe80::4001:aff:fe80:32%2#123 flags 0x11 failed: Cannot assign requested address Sep 13 00:09:32.424773 ntpd[1427]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:32%2#123 Sep 13 00:09:32.424795 ntpd[1427]: failed to init interface for address fe80::4001:aff:fe80:32%2 Sep 13 00:09:32.424844 ntpd[1427]: Listening on routing socket on fd #21 for interface updates Sep 13 00:09:32.431807 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:09:32.431885 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 13 00:09:32.420599 dbus-daemon[1421]: [system] SELinux support is enabled Sep 13 00:09:32.427250 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:09:32.434452 dbus-daemon[1421]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1370 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:09:32.459285 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:09:32.481364 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:09:32.481684 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:09:32.537064 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 13 00:09:32.537139 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 13 00:09:32.537172 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1337) Sep 13 00:09:32.537740 extend-filesystems[1425]: Resized partition /dev/sda9 Sep 13 00:09:32.550385 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:09:32.550385 extend-filesystems[1448]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 13 00:09:32.550385 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 13 00:09:32.550385 extend-filesystems[1448]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 13 00:09:32.602269 extend-filesystems[1425]: Resized filesystem in /dev/sda9 Sep 13 00:09:32.482286 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:09:32.482545 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:09:32.501599 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:09:32.501845 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:09:32.536499 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:09:32.536792 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:09:32.611240 jq[1450]: true Sep 13 00:09:32.611883 update_engine[1444]: I20250913 00:09:32.542726 1444 main.cc:92] Flatcar Update Engine starting Sep 13 00:09:32.611883 update_engine[1444]: I20250913 00:09:32.551106 1444 update_check_scheduler.cc:74] Next update check in 10m5s Sep 13 00:09:32.634407 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:09:32.645245 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:09:32.649326 jq[1457]: true Sep 13 00:09:32.664976 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:09:32.684407 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:09:32.684448 systemd-logind[1439]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 00:09:32.684479 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:09:32.685270 systemd-logind[1439]: New seat seat0.
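
The coreos-metadata fetches above all go to the GCE metadata server at 169.254.169.254, which answers only requests carrying the Metadata-Flavor header. One of those fetches, reproduced by hand:

    # Without the header the server rejects the request.
    curl -s -H 'Metadata-Flavor: Google' \
        http://169.254.169.254/computeMetadata/v1/instance/hostname
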
Sep 13 00:09:32.690555 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:09:32.691321 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:09:32.753344 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:09:32.753678 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:09:32.754014 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:09:32.777279 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 13 00:09:32.780893 tar[1456]: linux-amd64/LICENSE Sep 13 00:09:32.781956 tar[1456]: linux-amd64/helm Sep 13 00:09:32.787064 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:09:32.787372 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:09:32.808283 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:09:32.847783 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:09:32.913160 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:09:32.916822 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:09:32.938300 systemd[1]: Starting sshkeys.service... Sep 13 00:09:33.010008 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:09:33.032120 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:09:33.169015 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:09:33.169250 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 13 00:09:33.170362 dbus-daemon[1421]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1477 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:09:33.197402 systemd[1]: Starting polkit.service - Authorization Manager... 
Sep 13 00:09:33.222120 coreos-metadata[1495]: Sep 13 00:09:33.222 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 13 00:09:33.228095 coreos-metadata[1495]: Sep 13 00:09:33.226 INFO Fetch failed with 404: resource not found Sep 13 00:09:33.228095 coreos-metadata[1495]: Sep 13 00:09:33.226 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 13 00:09:33.228095 coreos-metadata[1495]: Sep 13 00:09:33.227 INFO Fetch successful Sep 13 00:09:33.228095 coreos-metadata[1495]: Sep 13 00:09:33.228 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 13 00:09:33.228534 coreos-metadata[1495]: Sep 13 00:09:33.228 INFO Fetch failed with 404: resource not found Sep 13 00:09:33.228615 coreos-metadata[1495]: Sep 13 00:09:33.228 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 13 00:09:33.229343 coreos-metadata[1495]: Sep 13 00:09:33.229 INFO Fetch failed with 404: resource not found Sep 13 00:09:33.229343 coreos-metadata[1495]: Sep 13 00:09:33.229 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 13 00:09:33.232880 coreos-metadata[1495]: Sep 13 00:09:33.229 INFO Fetch successful
Sep 13 00:09:33.248206 unknown[1495]: wrote ssh authorized keys file for user: core Sep 13 00:09:33.306249 update-ssh-keys[1509]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:09:33.307811 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:09:33.324368 systemd[1]: Finished sshkeys.service. Sep 13 00:09:33.330001 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:09:33.336481 polkitd[1504]: Started polkitd version 121 Sep 13 00:09:33.361228 polkitd[1504]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:09:33.361346 polkitd[1504]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:09:33.365467 polkitd[1504]: Finished loading, compiling and executing 2 rules Sep 13 00:09:33.370280 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:09:33.370574 systemd[1]: Started polkit.service - Authorization Manager. Sep 13 00:09:33.373408 polkitd[1504]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 13 00:09:33.403558 ntpd[1427]: bind(24) AF_INET6 fe80::4001:aff:fe80:32%2#123 flags 0x11 failed: Cannot assign requested address Sep 13 00:09:33.403623 ntpd[1427]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:32%2#123 Sep 13 00:09:33.403644 ntpd[1427]: failed to init interface for address fe80::4001:aff:fe80:32%2 Sep 13 00:09:33.435302 systemd-hostnamed[1477]: Hostname set to (transient) Sep 13 00:09:33.436188 systemd-resolved[1314]: System hostname changed to 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c'.
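
polkitd above compiles JavaScript rules from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d ("2 rules" on this image). A minimal local rule, written from a shell; the action ID and group are illustrative examples, not anything this boot installed:

    sudo tee /etc/polkit-1/rules.d/49-wheel-manage-units.rules <<'EOF'
    // Hypothetical example: let members of wheel manage systemd units.
    polkit.addRule(function(action, subject) {
        if (action.id == "org.freedesktop.systemd1.manage-units" &&
            subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
    });
    EOF
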
Sep 13 00:09:33.485062 systemd-networkd[1370]: eth0: Gained IPv6LL Sep 13 00:09:33.494444 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:09:33.505497 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:09:33.526194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:09:33.544291 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:09:33.558325 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Sep 13 00:09:33.600325 containerd[1464]: time="2025-09-13T00:09:33.600206306Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:09:33.623913 init.sh[1524]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 13 00:09:33.623913 init.sh[1524]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 13 00:09:33.626263 init.sh[1524]: + /usr/bin/google_instance_setup Sep 13 00:09:33.689622 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:09:33.766964 containerd[1464]: time="2025-09-13T00:09:33.766797044Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:33.775936 containerd[1464]: time="2025-09-13T00:09:33.775848877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:33.776122 containerd[1464]: time="2025-09-13T00:09:33.776101797Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:09:33.776210 containerd[1464]: time="2025-09-13T00:09:33.776194019Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:09:33.776492 containerd[1464]: time="2025-09-13T00:09:33.776469821Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:09:33.776591 containerd[1464]: time="2025-09-13T00:09:33.776573951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:33.776766 containerd[1464]: time="2025-09-13T00:09:33.776742346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:33.776888 containerd[1464]: time="2025-09-13T00:09:33.776867888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:33.779918 containerd[1464]: time="2025-09-13T00:09:33.778418812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:33.779918 containerd[1464]: time="2025-09-13T00:09:33.778463656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:33.779918 containerd[1464]: time="2025-09-13T00:09:33.778489125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:33.779918 containerd[1464]: time="2025-09-13T00:09:33.778507833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:33.779918 containerd[1464]: time="2025-09-13T00:09:33.778631379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:33.780393 containerd[1464]: time="2025-09-13T00:09:33.780362013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:33.780712 containerd[1464]: time="2025-09-13T00:09:33.780681839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:33.783465 containerd[1464]: time="2025-09-13T00:09:33.782928665Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:09:33.783465 containerd[1464]: time="2025-09-13T00:09:33.783158869Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:09:33.783465 containerd[1464]: time="2025-09-13T00:09:33.783236254Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:09:33.793336 containerd[1464]: time="2025-09-13T00:09:33.793270089Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:09:33.793494 containerd[1464]: time="2025-09-13T00:09:33.793395352Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:09:33.793494 containerd[1464]: time="2025-09-13T00:09:33.793434574Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:09:33.793494 containerd[1464]: time="2025-09-13T00:09:33.793460043Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:09:33.793494 containerd[1464]: time="2025-09-13T00:09:33.793485429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.793720881Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794146854Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794320474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794345514Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794367478Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794392619Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794415863Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794438520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794461578Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794486700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794509891Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794531050Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794553514Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:09:33.796586 containerd[1464]: time="2025-09-13T00:09:33.794583850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794609472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794630409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794652267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794672599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794694369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794714568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794736866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794776275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794802178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794822036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.794842321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.796946247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.797014454Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:09:33.797286 containerd[1464]: time="2025-09-13T00:09:33.797154899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797924 containerd[1464]: time="2025-09-13T00:09:33.797900727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.797974 containerd[1464]: time="2025-09-13T00:09:33.797935616Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:09:33.798322 containerd[1464]: time="2025-09-13T00:09:33.798091132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:09:33.798322 containerd[1464]: time="2025-09-13T00:09:33.798293541Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:09:33.798494 containerd[1464]: time="2025-09-13T00:09:33.798361291Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:09:33.798494 containerd[1464]: time="2025-09-13T00:09:33.798387557Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:09:33.798494 containerd[1464]: time="2025-09-13T00:09:33.798405365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:09:33.798494 containerd[1464]: time="2025-09-13T00:09:33.798446903Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:09:33.798494 containerd[1464]: time="2025-09-13T00:09:33.798468094Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:09:33.798494 containerd[1464]: time="2025-09-13T00:09:33.798487019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:09:33.801909 containerd[1464]: time="2025-09-13T00:09:33.800332128Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:09:33.801909 containerd[1464]: time="2025-09-13T00:09:33.800496613Z" level=info msg="Connect containerd service" Sep 13 00:09:33.801909 containerd[1464]: time="2025-09-13T00:09:33.800959585Z" level=info msg="using legacy CRI server" Sep 13 00:09:33.801909 containerd[1464]: time="2025-09-13T00:09:33.800980844Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:09:33.802994 containerd[1464]: time="2025-09-13T00:09:33.801964437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:09:33.806192 containerd[1464]: time="2025-09-13T00:09:33.804986654Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:09:33.806192 
containerd[1464]: time="2025-09-13T00:09:33.805204654Z" level=info msg="Start subscribing containerd event" Sep 13 00:09:33.806192 containerd[1464]: time="2025-09-13T00:09:33.805307998Z" level=info msg="Start recovering state" Sep 13 00:09:33.806192 containerd[1464]: time="2025-09-13T00:09:33.805460702Z" level=info msg="Start event monitor" Sep 13 00:09:33.806192 containerd[1464]: time="2025-09-13T00:09:33.805506445Z" level=info msg="Start snapshots syncer" Sep 13 00:09:33.806192 containerd[1464]: time="2025-09-13T00:09:33.805521710Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:09:33.806192 containerd[1464]: time="2025-09-13T00:09:33.805533512Z" level=info msg="Start streaming server" Sep 13 00:09:33.806192 containerd[1464]: time="2025-09-13T00:09:33.805575262Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:09:33.806192 containerd[1464]: time="2025-09-13T00:09:33.805666297Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:09:33.805925 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:09:33.812313 containerd[1464]: time="2025-09-13T00:09:33.811326457Z" level=info msg="containerd successfully booted in 0.213961s" Sep 13 00:09:34.297834 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:09:34.345512 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:09:34.367591 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:09:34.384919 systemd[1]: Started sshd@0-10.128.0.50:22-147.75.109.163:55494.service - OpenSSH per-connection server daemon (147.75.109.163:55494). Sep 13 00:09:34.410440 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:09:34.410759 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:09:34.431300 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:09:34.476936 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:09:34.500396 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:09:34.522365 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:09:34.532414 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:09:34.563474 tar[1456]: linux-amd64/README.md Sep 13 00:09:34.596134 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:09:34.688346 instance-setup[1529]: INFO Running google_set_multiqueue. Sep 13 00:09:34.710978 instance-setup[1529]: INFO Set channels for eth0 to 2. Sep 13 00:09:34.720124 instance-setup[1529]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Sep 13 00:09:34.723035 instance-setup[1529]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 13 00:09:34.723109 instance-setup[1529]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 13 00:09:34.725064 instance-setup[1529]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 13 00:09:34.725393 instance-setup[1529]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 13 00:09:34.728490 instance-setup[1529]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 13 00:09:34.728932 instance-setup[1529]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
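
The containerd "failed to load cni during init" error above is expected this early: the CRI plugin looks in /etc/cni/net.d (per the config dump) and nothing has installed a network config yet; a CNI plugin or kubeadm normally does that later. A minimal bridge conflist of the shape containerd accepts (the name and subnet are illustrative):

    tee /etc/cni/net.d/10-example.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF
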
Sep 13 00:09:34.730223 instance-setup[1529]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 13 00:09:34.742402 instance-setup[1529]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 13 00:09:34.748915 instance-setup[1529]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 13 00:09:34.751474 instance-setup[1529]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 13 00:09:34.751539 instance-setup[1529]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 13 00:09:34.786332 init.sh[1524]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 13 00:09:34.865954 sshd[1546]: Accepted publickey for core from 147.75.109.163 port 55494 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:09:34.866649 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:34.893048 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:09:34.912736 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:09:34.932076 systemd-logind[1439]: New session 1 of user core. Sep 13 00:09:34.965699 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:09:34.988938 startup-script[1588]: INFO Starting startup scripts. Sep 13 00:09:34.990409 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:09:34.996633 startup-script[1588]: INFO No startup scripts found in metadata. Sep 13 00:09:34.996967 startup-script[1588]: INFO Finished running startup scripts. Sep 13 00:09:35.028760 init.sh[1524]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 13 00:09:35.028760 init.sh[1524]: + daemon_pids=() Sep 13 00:09:35.028760 init.sh[1524]: + for d in accounts clock_skew network Sep 13 00:09:35.028760 init.sh[1524]: + daemon_pids+=($!) Sep 13 00:09:35.028760 init.sh[1524]: + for d in accounts clock_skew network Sep 13 00:09:35.028760 init.sh[1524]: + daemon_pids+=($!) Sep 13 00:09:35.028760 init.sh[1524]: + for d in accounts clock_skew network Sep 13 00:09:35.028760 init.sh[1524]: + daemon_pids+=($!) Sep 13 00:09:35.028760 init.sh[1524]: + NOTIFY_SOCKET=/run/systemd/notify Sep 13 00:09:35.028760 init.sh[1524]: + /usr/bin/systemd-notify --ready Sep 13 00:09:35.029575 init.sh[1594]: + /usr/bin/google_accounts_daemon Sep 13 00:09:35.031506 init.sh[1595]: + /usr/bin/google_clock_skew_daemon Sep 13 00:09:35.032320 init.sh[1596]: + /usr/bin/google_network_daemon Sep 13 00:09:35.032486 (systemd)[1593]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:09:35.067046 systemd[1]: Started oem-gce.service - GCE Linux Agent. Sep 13 00:09:35.083888 init.sh[1524]: + wait -n 1594 1595 1596 Sep 13 00:09:35.287681 systemd[1593]: Queued start job for default target default.target. Sep 13 00:09:35.294765 systemd[1593]: Created slice app.slice - User Application Slice. Sep 13 00:09:35.294822 systemd[1593]: Reached target paths.target - Paths. Sep 13 00:09:35.294847 systemd[1593]: Reached target timers.target - Timers. Sep 13 00:09:35.297139 systemd[1593]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:09:35.340318 systemd[1593]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:09:35.340537 systemd[1593]: Reached target sockets.target - Sockets. 
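
google_set_multiqueue above pins each virtio queue interrupt to a CPU and writes XPS masks so transmit queues are serviced by matching CPUs; the "write error: Value too large" lines are the script probing masks wider than this 2-vCPU shape supports. The underlying writes, spelled out (IRQ numbers taken from this boot's log):

    # Pin the virtio1 queue IRQs to CPUs 0 and 1, as instance-setup reported.
    echo 0 > /proc/irq/31/smp_affinity_list
    echo 1 > /proc/irq/33/smp_affinity_list
    # XPS bitmasks: CPU0 (0x1) serves tx-0, CPU1 (0x2) serves tx-1.
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus
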
Sep 13 00:09:35.340566 systemd[1593]: Reached target basic.target - Basic System. Sep 13 00:09:35.340650 systemd[1593]: Reached target default.target - Main User Target. Sep 13 00:09:35.340708 systemd[1593]: Startup finished in 291ms. Sep 13 00:09:35.341020 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:09:35.358130 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:09:35.547252 google-clock-skew[1595]: INFO Starting Google Clock Skew daemon. Sep 13 00:09:35.600073 google-clock-skew[1595]: INFO Clock drift token has changed: 0. Sep 13 00:09:35.615903 google-networking[1596]: INFO Starting Google Networking daemon. Sep 13 00:09:35.672385 systemd[1]: Started sshd@1-10.128.0.50:22-147.75.109.163:55506.service - OpenSSH per-connection server daemon (147.75.109.163:55506). Sep 13 00:09:35.765312 groupadd[1617]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 13 00:09:35.772682 groupadd[1617]: group added to /etc/gshadow: name=google-sudoers Sep 13 00:09:35.833665 groupadd[1617]: new group: name=google-sudoers, GID=1000 Sep 13 00:09:35.870057 google-accounts[1594]: INFO Starting Google Accounts daemon. Sep 13 00:09:35.884180 google-accounts[1594]: WARNING OS Login not installed. Sep 13 00:09:35.885962 google-accounts[1594]: INFO Creating a new user account for 0. Sep 13 00:09:35.894223 init.sh[1626]: useradd: invalid user name '0': use --badname to ignore Sep 13 00:09:35.895658 google-accounts[1594]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 13 00:09:36.104018 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 55506 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:09:36.106713 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:36.115165 systemd-logind[1439]: New session 2 of user core. Sep 13 00:09:36.124215 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:09:36.202184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:36.214032 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:09:36.219553 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:09:36.225617 systemd[1]: Startup finished in 1.093s (kernel) + 10.420s (initrd) + 10.100s (userspace) = 21.614s. Sep 13 00:09:36.382369 sshd[1616]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:36.389113 systemd[1]: sshd@1-10.128.0.50:22-147.75.109.163:55506.service: Deactivated successfully. Sep 13 00:09:36.392609 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:09:36.394284 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:09:36.396510 systemd-logind[1439]: Removed session 2. Sep 13 00:09:36.403398 ntpd[1427]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:32%2]:123 Sep 13 00:09:36.451578 systemd[1]: Started sshd@2-10.128.0.50:22-147.75.109.163:55514.service - OpenSSH per-connection server daemon (147.75.109.163:55514). Sep 13 00:09:36.000192 systemd-resolved[1314]: Clock change detected. Flushing caches. Sep 13 00:09:36.020758 systemd-journald[1107]: Time jumped backwards, rotating.
Sep 13 00:09:36.007635 google-clock-skew[1595]: INFO Synced system time with hardware clock. Sep 13 00:09:36.364815 sshd[1647]: Accepted publickey for core from 147.75.109.163 port 55514 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:09:36.367041 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:36.374923 systemd-logind[1439]: New session 3 of user core. Sep 13 00:09:36.382555 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:09:36.634800 sshd[1647]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:36.642965 systemd[1]: sshd@2-10.128.0.50:22-147.75.109.163:55514.service: Deactivated successfully. Sep 13 00:09:36.646157 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:09:36.647442 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:09:36.649551 systemd-logind[1439]: Removed session 3. Sep 13 00:09:36.711851 systemd[1]: Started sshd@3-10.128.0.50:22-147.75.109.163:55516.service - OpenSSH per-connection server daemon (147.75.109.163:55516). Sep 13 00:09:36.813367 kubelet[1634]: E0913 00:09:36.813187 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:09:36.816491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:09:36.816773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:09:36.817297 systemd[1]: kubelet.service: Consumed 1.377s CPU time. Sep 13 00:09:37.088467 sshd[1655]: Accepted publickey for core from 147.75.109.163 port 55516 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:09:37.090670 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:37.097966 systemd-logind[1439]: New session 4 of user core. Sep 13 00:09:37.102554 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:09:37.364002 sshd[1655]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:37.368964 systemd[1]: sshd@3-10.128.0.50:22-147.75.109.163:55516.service: Deactivated successfully. Sep 13 00:09:37.371467 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:09:37.373283 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:09:37.374965 systemd-logind[1439]: Removed session 4. Sep 13 00:09:37.439715 systemd[1]: Started sshd@4-10.128.0.50:22-147.75.109.163:55520.service - OpenSSH per-connection server daemon (147.75.109.163:55520). Sep 13 00:09:37.816938 sshd[1664]: Accepted publickey for core from 147.75.109.163 port 55520 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:09:37.819722 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:37.827304 systemd-logind[1439]: New session 5 of user core. Sep 13 00:09:37.833532 systemd[1]: Started session-5.scope - Session 5 of User core. 
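
The kubelet failure above is the normal state of an unjoined node: /var/lib/kubelet/config.yaml is generated by 'kubeadm init' or 'kubeadm join', not shipped in the image, so the unit exits until that happens. The file it expects is a KubeletConfiguration; a minimal hand-written sketch of its shape:

    tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Match the SystemdCgroup=true runc option in the containerd config above.
    cgroupDriver: systemd
    EOF
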
Sep 13 00:09:38.064870 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:09:38.065440 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:09:38.082278 sudo[1667]: pam_unix(sudo:session): session closed for user root Sep 13 00:09:38.141153 sshd[1664]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:38.146039 systemd[1]: sshd@4-10.128.0.50:22-147.75.109.163:55520.service: Deactivated successfully. Sep 13 00:09:38.148491 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:09:38.150451 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:09:38.152163 systemd-logind[1439]: Removed session 5. Sep 13 00:09:38.208673 systemd[1]: Started sshd@5-10.128.0.50:22-147.75.109.163:55522.service - OpenSSH per-connection server daemon (147.75.109.163:55522). Sep 13 00:09:38.564170 sshd[1672]: Accepted publickey for core from 147.75.109.163 port 55522 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:09:38.566212 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:38.572828 systemd-logind[1439]: New session 6 of user core. Sep 13 00:09:38.588521 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:09:38.780760 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:09:38.781324 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:09:38.786697 sudo[1676]: pam_unix(sudo:session): session closed for user root Sep 13 00:09:38.801200 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:09:38.801757 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:09:38.821853 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:09:38.825312 auditctl[1679]: No rules Sep 13 00:09:38.825174 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:09:38.825487 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:09:38.829519 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:09:38.875783 augenrules[1697]: No rules Sep 13 00:09:38.876747 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:09:38.879215 sudo[1675]: pam_unix(sudo:session): session closed for user root Sep 13 00:09:38.935039 sshd[1672]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:38.939857 systemd[1]: sshd@5-10.128.0.50:22-147.75.109.163:55522.service: Deactivated successfully. Sep 13 00:09:38.942436 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:09:38.944328 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:09:38.945830 systemd-logind[1439]: Removed session 6. Sep 13 00:09:39.012670 systemd[1]: Started sshd@6-10.128.0.50:22-147.75.109.163:55532.service - OpenSSH per-connection server daemon (147.75.109.163:55532). Sep 13 00:09:39.400024 sshd[1705]: Accepted publickey for core from 147.75.109.163 port 55532 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:09:39.401999 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:39.408733 systemd-logind[1439]: New session 7 of user core. 
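
The sudo sequence above removes the shipped audit rule fragments and restarts audit-rules.service, which ends in "No rules": augenrules concatenates /etc/audit/rules.d/*.rules and loads the result into the kernel, so an empty directory yields an empty ruleset. The same reload, done by hand:

    # Rebuild the merged rule set from /etc/audit/rules.d and load it.
    sudo augenrules --load
    # List what the kernel now enforces; prints "No rules" after the deletion above.
    sudo auditctl -l
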
Sep 13 00:09:39.418566 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:09:39.624954 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:09:39.625499 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:09:40.079671 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:09:40.091967 (dockerd)[1724]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:09:40.554917 dockerd[1724]: time="2025-09-13T00:09:40.554729602Z" level=info msg="Starting up" Sep 13 00:09:40.841687 dockerd[1724]: time="2025-09-13T00:09:40.841266302Z" level=info msg="Loading containers: start." Sep 13 00:09:40.998529 kernel: Initializing XFRM netlink socket Sep 13 00:09:41.119471 systemd-networkd[1370]: docker0: Link UP Sep 13 00:09:41.141641 dockerd[1724]: time="2025-09-13T00:09:41.141576984Z" level=info msg="Loading containers: done." Sep 13 00:09:41.164316 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2127925943-merged.mount: Deactivated successfully. Sep 13 00:09:41.167898 dockerd[1724]: time="2025-09-13T00:09:41.167844074Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:09:41.168080 dockerd[1724]: time="2025-09-13T00:09:41.168002716Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:09:41.168283 dockerd[1724]: time="2025-09-13T00:09:41.168199118Z" level=info msg="Daemon has completed initialization" Sep 13 00:09:41.216050 dockerd[1724]: time="2025-09-13T00:09:41.215573460Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:09:41.215888 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:09:42.170721 containerd[1464]: time="2025-09-13T00:09:42.170656016Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:09:42.664927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003573882.mount: Deactivated successfully. 
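dockerd comes up on overlay2 and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; that only degrades image-build performance, the driver itself is healthy. On Flatcar the daemon is tuned through the unit environment variables listed above (DOCKER_OPTS and friends), so a daemon.json like the following is purely a hypothetical equivalent:

```sh
# Hypothetical equivalent of pinning the storage driver the daemon auto-selected;
# Flatcar normally does this via DOCKER_OPTS in the unit environment instead.
cat <<'EOF' | sudo tee /etc/docker/daemon.json >/dev/null
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
```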
Sep 13 00:09:44.398990 containerd[1464]: time="2025-09-13T00:09:44.398914007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:44.400781 containerd[1464]: time="2025-09-13T00:09:44.400687003Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28845499" Sep 13 00:09:44.403271 containerd[1464]: time="2025-09-13T00:09:44.401974170Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:44.405949 containerd[1464]: time="2025-09-13T00:09:44.405902180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:44.407442 containerd[1464]: time="2025-09-13T00:09:44.407399024Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.236686457s" Sep 13 00:09:44.407615 containerd[1464]: time="2025-09-13T00:09:44.407586648Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 00:09:44.408477 containerd[1464]: time="2025-09-13T00:09:44.408444785Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 00:09:45.932073 containerd[1464]: time="2025-09-13T00:09:45.931982576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:45.933502 containerd[1464]: time="2025-09-13T00:09:45.933415981Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24788961" Sep 13 00:09:45.935644 containerd[1464]: time="2025-09-13T00:09:45.935572758Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:45.941055 containerd[1464]: time="2025-09-13T00:09:45.939428684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:45.941055 containerd[1464]: time="2025-09-13T00:09:45.940877784Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.532110255s" Sep 13 00:09:45.941055 containerd[1464]: time="2025-09-13T00:09:45.940927086Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 00:09:45.942308 
containerd[1464]: time="2025-09-13T00:09:45.942261133Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 00:09:47.067554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:09:47.076581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:09:47.368681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:47.377829 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:09:47.432371 containerd[1464]: time="2025-09-13T00:09:47.432301170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:47.437569 containerd[1464]: time="2025-09-13T00:09:47.437378430Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19178205" Sep 13 00:09:47.437713 kubelet[1938]: E0913 00:09:47.437423 1938 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:09:47.438738 containerd[1464]: time="2025-09-13T00:09:47.438661517Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:47.444560 containerd[1464]: time="2025-09-13T00:09:47.444488152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:47.445855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:09:47.446653 containerd[1464]: time="2025-09-13T00:09:47.446608349Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.504131623s" Sep 13 00:09:47.446788 containerd[1464]: time="2025-09-13T00:09:47.446760793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 00:09:47.447723 containerd[1464]: time="2025-09-13T00:09:47.447639460Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:09:47.448542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:09:48.823877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313722820.mount: Deactivated successfully. 
Sep 13 00:09:49.543854 containerd[1464]: time="2025-09-13T00:09:49.543743349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:49.545413 containerd[1464]: time="2025-09-13T00:09:49.545182756Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30926101" Sep 13 00:09:49.548255 containerd[1464]: time="2025-09-13T00:09:49.546861902Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:49.550105 containerd[1464]: time="2025-09-13T00:09:49.550054745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:49.551260 containerd[1464]: time="2025-09-13T00:09:49.551117576Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.103197463s" Sep 13 00:09:49.551437 containerd[1464]: time="2025-09-13T00:09:49.551406496Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:09:49.552202 containerd[1464]: time="2025-09-13T00:09:49.552160087Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:09:49.994006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125741583.mount: Deactivated successfully. 
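All of these pulls are issued through containerd's CRI image service (note the containerd[1464] PID, not dockerd), which is why they proceed untouched while kubelet.service flaps. Assuming crictl is pointed at containerd's socket, the cached set can be inspected directly:

```sh
# The PullImage/ImageCreate events above are CRI-side (containerd), not Docker.
sudo crictl images | grep registry.k8s.io
```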
Sep 13 00:09:51.307825 containerd[1464]: time="2025-09-13T00:09:51.307747361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:51.309588 containerd[1464]: time="2025-09-13T00:09:51.309406241Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Sep 13 00:09:51.310956 containerd[1464]: time="2025-09-13T00:09:51.310887755Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:51.316254 containerd[1464]: time="2025-09-13T00:09:51.314725977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:51.316254 containerd[1464]: time="2025-09-13T00:09:51.316159690Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.763958138s" Sep 13 00:09:51.316254 containerd[1464]: time="2025-09-13T00:09:51.316212571Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:09:51.317593 containerd[1464]: time="2025-09-13T00:09:51.317549386Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:09:51.701654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941329749.mount: Deactivated successfully. 
Sep 13 00:09:51.709897 containerd[1464]: time="2025-09-13T00:09:51.709821700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:51.711241 containerd[1464]: time="2025-09-13T00:09:51.711143179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Sep 13 00:09:51.713758 containerd[1464]: time="2025-09-13T00:09:51.712540607Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:51.719082 containerd[1464]: time="2025-09-13T00:09:51.717640820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:51.719082 containerd[1464]: time="2025-09-13T00:09:51.718910578Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 401.316303ms" Sep 13 00:09:51.719082 containerd[1464]: time="2025-09-13T00:09:51.718956757Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:09:51.720257 containerd[1464]: time="2025-09-13T00:09:51.720189265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 00:09:52.166881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306895312.mount: Deactivated successfully. Sep 13 00:09:54.559664 containerd[1464]: time="2025-09-13T00:09:54.559587251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:54.561463 containerd[1464]: time="2025-09-13T00:09:54.561396337Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57689565" Sep 13 00:09:54.563023 containerd[1464]: time="2025-09-13T00:09:54.562963287Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:54.566914 containerd[1464]: time="2025-09-13T00:09:54.566273690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:54.568011 containerd[1464]: time="2025-09-13T00:09:54.567962548Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.847553218s" Sep 13 00:09:54.568115 containerd[1464]: time="2025-09-13T00:09:54.568015678Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 00:09:57.696690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
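With pause and etcd done, all seven control-plane images (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) are local. The same set can be printed ahead of time, which is presumably what the install script driving this host keys off:

```sh
# Prints the image set kubeadm expects for a release; the tags match the repo
# tags pulled above (v1.32.9, coredns v1.11.3, pause 3.10, etcd 3.5.16-0).
kubeadm config images list --kubernetes-version v1.32.9
```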
Sep 13 00:09:57.708394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:09:58.082509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:58.088851 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:09:58.160818 kubelet[2092]: E0913 00:09:58.160753 2092 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:09:58.164936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:09:58.165197 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:09:58.343210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:58.351685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:09:58.409905 systemd[1]: Reloading requested from client PID 2106 ('systemctl') (unit session-7.scope)... Sep 13 00:09:58.409934 systemd[1]: Reloading... Sep 13 00:09:58.579269 zram_generator::config[2148]: No configuration found. Sep 13 00:09:58.732741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:09:58.856038 systemd[1]: Reloading finished in 445 ms. Sep 13 00:09:58.922400 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:09:58.922548 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:09:58.922903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:58.928729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:09:59.251520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:59.260972 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:09:59.325539 kubelet[2198]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:09:59.325539 kubelet[2198]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:09:59.325539 kubelet[2198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
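After the reload at 00:09:58 (requested from session-7's systemctl, PID 2106), the restarted kubelet[2198] finally gets past config loading; something, presumably the install.sh run in session-7, has written /var/lib/kubelet/config.yaml by now. The three deprecation warnings flag legacy flags that have KubeletConfiguration equivalents; a sketch of those equivalents (the containerd socket path is an assumption, the plugin directory is the one probe.go reports below):

```sh
# Config-file counterparts of the deprecated flags warned about above (sketch).
cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml >/dev/null
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed path
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
```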
Sep 13 00:09:59.326122 kubelet[2198]: I0913 00:09:59.325755 2198 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:09:59.975781 kubelet[2198]: I0913 00:09:59.975717 2198 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:09:59.975781 kubelet[2198]: I0913 00:09:59.975757 2198 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:09:59.976194 kubelet[2198]: I0913 00:09:59.976153 2198 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:10:00.024784 kubelet[2198]: E0913 00:10:00.024735 2198 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:00.026326 kubelet[2198]: I0913 00:10:00.026078 2198 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:10:00.036486 kubelet[2198]: E0913 00:10:00.036419 2198 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:10:00.036486 kubelet[2198]: I0913 00:10:00.036473 2198 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:10:00.040163 kubelet[2198]: I0913 00:10:00.040127 2198 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:10:00.043516 kubelet[2198]: I0913 00:10:00.043429 2198 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:10:00.043782 kubelet[2198]: I0913 00:10:00.043500 2198 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:10:00.043987 kubelet[2198]: I0913 00:10:00.043786 2198 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:10:00.043987 kubelet[2198]: I0913 00:10:00.043807 2198 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:10:00.044085 kubelet[2198]: I0913 00:10:00.043999 2198 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:00.051439 kubelet[2198]: I0913 00:10:00.051287 2198 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:10:00.051439 kubelet[2198]: I0913 00:10:00.051367 2198 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:10:00.051439 kubelet[2198]: I0913 00:10:00.051411 2198 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:10:00.051439 kubelet[2198]: I0913 00:10:00.051430 2198 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:10:00.061506 kubelet[2198]: W0913 00:10:00.060358 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c&limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Sep 13 00:10:00.061506 kubelet[2198]: E0913 00:10:00.060462 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c&limit=500&resourceVersion=0\": dial tcp 
10.128.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:00.061506 kubelet[2198]: W0913 00:10:00.060993 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Sep 13 00:10:00.061506 kubelet[2198]: E0913 00:10:00.061049 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:00.061841 kubelet[2198]: I0913 00:10:00.061677 2198 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:10:00.062266 kubelet[2198]: I0913 00:10:00.062211 2198 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:10:00.064705 kubelet[2198]: W0913 00:10:00.063537 2198 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:10:00.071276 kubelet[2198]: I0913 00:10:00.071219 2198 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:10:00.071493 kubelet[2198]: I0913 00:10:00.071479 2198 server.go:1287] "Started kubelet" Sep 13 00:10:00.082533 kubelet[2198]: I0913 00:10:00.082494 2198 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:10:00.086414 kubelet[2198]: E0913 00:10:00.083083 2198 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c.1864af07351376c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,UID:ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,},FirstTimestamp:2025-09-13 00:10:00.071435969 +0000 UTC m=+0.804641744,LastTimestamp:2025-09-13 00:10:00.071435969 +0000 UTC m=+0.804641744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,}" Sep 13 00:10:00.093016 kubelet[2198]: I0913 00:10:00.092183 2198 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:10:00.093370 kubelet[2198]: I0913 00:10:00.093347 2198 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:10:00.093728 kubelet[2198]: I0913 00:10:00.093701 2198 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:10:00.093910 kubelet[2198]: E0913 00:10:00.093887 2198 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" Sep 13 00:10:00.095335 kubelet[2198]: I0913 00:10:00.095218 2198 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 
00:10:00.095668 kubelet[2198]: I0913 00:10:00.095640 2198 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:10:00.095979 kubelet[2198]: I0913 00:10:00.095942 2198 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:10:00.097129 kubelet[2198]: E0913 00:10:00.097016 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="200ms" Sep 13 00:10:00.097445 kubelet[2198]: I0913 00:10:00.097414 2198 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:10:00.097556 kubelet[2198]: I0913 00:10:00.097527 2198 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:10:00.100127 kubelet[2198]: I0913 00:10:00.099862 2198 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:10:00.100127 kubelet[2198]: I0913 00:10:00.099935 2198 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:10:00.100714 kubelet[2198]: E0913 00:10:00.100680 2198 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:10:00.102873 kubelet[2198]: I0913 00:10:00.101413 2198 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:10:00.115990 kubelet[2198]: I0913 00:10:00.115926 2198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:10:00.118047 kubelet[2198]: I0913 00:10:00.117996 2198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:10:00.118047 kubelet[2198]: I0913 00:10:00.118032 2198 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:10:00.118254 kubelet[2198]: I0913 00:10:00.118060 2198 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
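The large NodeConfig dump above is mostly defaults; its HardEvictionThresholds array reads more easily as the kubelet's evictionHard map, with the fractional Percentage values (0.1, 0.05, 0.15) expressed as percentages:

```sh
# The HardEvictionThresholds JSON from the NodeConfig dump, as the equivalent
# KubeletConfiguration stanza (same five signals and values):
cat <<'EOF'
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
EOF
```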
Sep 13 00:10:00.118254 kubelet[2198]: I0913 00:10:00.118073 2198 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:10:00.118254 kubelet[2198]: E0913 00:10:00.118159 2198 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:10:00.128082 kubelet[2198]: W0913 00:10:00.127818 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Sep 13 00:10:00.128082 kubelet[2198]: E0913 00:10:00.127908 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:00.128082 kubelet[2198]: W0913 00:10:00.128016 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Sep 13 00:10:00.128082 kubelet[2198]: E0913 00:10:00.128075 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:00.143371 kubelet[2198]: I0913 00:10:00.143317 2198 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:10:00.143371 kubelet[2198]: I0913 00:10:00.143356 2198 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:10:00.143610 kubelet[2198]: I0913 00:10:00.143390 2198 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:00.146351 kubelet[2198]: I0913 00:10:00.146277 2198 policy_none.go:49] "None policy: Start" Sep 13 00:10:00.146351 kubelet[2198]: I0913 00:10:00.146313 2198 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:10:00.146351 kubelet[2198]: I0913 00:10:00.146336 2198 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:10:00.155302 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:10:00.169992 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:10:00.181273 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
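Every error in this stretch shares one cause: the CSR post, the event write, the lease ensure, and all four reflector lists (Node, Service, CSIDriver, RuntimeClass) fail with connection refused against 10.128.0.50:6443, the apiserver this same kubelet is about to launch from /etc/kubernetes/manifests. The errors are therefore self-resolving; the same state is visible by hand:

```sh
# Until the apiserver static pod is up, any client sees exactly what the kubelet
# logs: "dial tcp 10.128.0.50:6443: connect: connection refused".
curl -ks https://10.128.0.50:6443/healthz || echo "apiserver not up yet"
```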
Sep 13 00:10:00.184887 kubelet[2198]: I0913 00:10:00.183922 2198 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:10:00.184887 kubelet[2198]: I0913 00:10:00.184257 2198 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:10:00.184887 kubelet[2198]: I0913 00:10:00.184278 2198 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:10:00.184887 kubelet[2198]: I0913 00:10:00.184597 2198 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:10:00.187766 kubelet[2198]: E0913 00:10:00.187736 2198 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:10:00.188111 kubelet[2198]: E0913 00:10:00.188061 2198 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" Sep 13 00:10:00.242964 systemd[1]: Created slice kubepods-burstable-pod2d6e4b4d7a274760de70b02f5d807a86.slice - libcontainer container kubepods-burstable-pod2d6e4b4d7a274760de70b02f5d807a86.slice. Sep 13 00:10:00.254682 kubelet[2198]: E0913 00:10:00.254362 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.260108 systemd[1]: Created slice kubepods-burstable-podb6b28e2eb03f4cc4168ca073ba1b2cd8.slice - libcontainer container kubepods-burstable-podb6b28e2eb03f4cc4168ca073ba1b2cd8.slice. Sep 13 00:10:00.263885 kubelet[2198]: E0913 00:10:00.263846 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.267790 systemd[1]: Created slice kubepods-burstable-pod2afc54482e52fb9b5e8cf3c4f5a9d99f.slice - libcontainer container kubepods-burstable-pod2afc54482e52fb9b5e8cf3c4f5a9d99f.slice. 
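The three kubepods-burstable-pod&lt;UID&gt;.slice units map one-to-one onto the static pods: 2d6e4b4d… is kube-apiserver, b6b28e2e… kube-controller-manager, and 2afc5448… kube-scheduler, as the volume and RunPodSandbox entries below confirm. The resulting cgroup hierarchy can be inspected per slice:

```sh
# Each burstable slice above embeds a static pod's UID; systemd can show the
# slice and, once the containers start, the processes inside it:
sudo systemctl status kubepods-burstable.slice
```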
Sep 13 00:10:00.270200 kubelet[2198]: E0913 00:10:00.270153 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.292565 kubelet[2198]: I0913 00:10:00.292504 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.295270 kubelet[2198]: E0913 00:10:00.293869 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.297980 kubelet[2198]: E0913 00:10:00.297926 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="400ms" Sep 13 00:10:00.301247 kubelet[2198]: I0913 00:10:00.301149 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.301358 kubelet[2198]: I0913 00:10:00.301249 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.301358 kubelet[2198]: I0913 00:10:00.301289 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.301358 kubelet[2198]: I0913 00:10:00.301324 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2afc54482e52fb9b5e8cf3c4f5a9d99f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"2afc54482e52fb9b5e8cf3c4f5a9d99f\") " pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.301358 kubelet[2198]: I0913 00:10:00.301353 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d6e4b4d7a274760de70b02f5d807a86-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"2d6e4b4d7a274760de70b02f5d807a86\") " 
pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.301647 kubelet[2198]: I0913 00:10:00.301381 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d6e4b4d7a274760de70b02f5d807a86-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"2d6e4b4d7a274760de70b02f5d807a86\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.301647 kubelet[2198]: I0913 00:10:00.301411 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d6e4b4d7a274760de70b02f5d807a86-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"2d6e4b4d7a274760de70b02f5d807a86\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.301647 kubelet[2198]: I0913 00:10:00.301439 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.301647 kubelet[2198]: I0913 00:10:00.301468 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.499773 kubelet[2198]: I0913 00:10:00.499624 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.500363 kubelet[2198]: E0913 00:10:00.500147 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.556396 containerd[1464]: time="2025-09-13T00:10:00.556328467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,Uid:2d6e4b4d7a274760de70b02f5d807a86,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:00.569633 containerd[1464]: time="2025-09-13T00:10:00.569567205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,Uid:b6b28e2eb03f4cc4168ca073ba1b2cd8,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:00.572150 containerd[1464]: time="2025-09-13T00:10:00.571966533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,Uid:2afc54482e52fb9b5e8cf3c4f5a9d99f,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:00.699425 kubelet[2198]: E0913 00:10:00.699264 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="800ms" Sep 13 00:10:00.906800 kubelet[2198]: I0913 00:10:00.906646 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.907178 kubelet[2198]: E0913 00:10:00.907097 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:00.960061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053624438.mount: Deactivated successfully. Sep 13 00:10:00.968024 containerd[1464]: time="2025-09-13T00:10:00.967935511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:00.970862 containerd[1464]: time="2025-09-13T00:10:00.970637243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:10:00.972388 containerd[1464]: time="2025-09-13T00:10:00.972315324Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:00.973961 containerd[1464]: time="2025-09-13T00:10:00.973900028Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:00.975537 containerd[1464]: time="2025-09-13T00:10:00.975484916Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:00.978258 containerd[1464]: time="2025-09-13T00:10:00.977010904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Sep 13 00:10:00.981580 containerd[1464]: time="2025-09-13T00:10:00.981520497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:10:00.986070 containerd[1464]: time="2025-09-13T00:10:00.986020321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:00.987439 containerd[1464]: time="2025-09-13T00:10:00.987379010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 417.700999ms" Sep 13 00:10:00.991743 containerd[1464]: time="2025-09-13T00:10:00.991682144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 419.510091ms" Sep 13 00:10:00.991979 containerd[1464]: time="2025-09-13T00:10:00.991918472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 435.496719ms" Sep 13 00:10:01.062303 kubelet[2198]: W0913 00:10:01.061958 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Sep 13 00:10:01.062303 kubelet[2198]: E0913 00:10:01.062079 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:01.084013 kubelet[2198]: W0913 00:10:01.083872 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Sep 13 00:10:01.084013 kubelet[2198]: E0913 00:10:01.083972 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:01.190703 containerd[1464]: time="2025-09-13T00:10:01.190257891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:01.190993 containerd[1464]: time="2025-09-13T00:10:01.190394111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:01.190993 containerd[1464]: time="2025-09-13T00:10:01.190801395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:01.191245 containerd[1464]: time="2025-09-13T00:10:01.190848123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:01.191245 containerd[1464]: time="2025-09-13T00:10:01.190916521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:01.191245 containerd[1464]: time="2025-09-13T00:10:01.190944236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:01.191245 containerd[1464]: time="2025-09-13T00:10:01.191066672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:01.191812 containerd[1464]: time="2025-09-13T00:10:01.191640215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:01.194219 containerd[1464]: time="2025-09-13T00:10:01.193838752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:01.194219 containerd[1464]: time="2025-09-13T00:10:01.193917487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:01.194219 containerd[1464]: time="2025-09-13T00:10:01.193945009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:01.194219 containerd[1464]: time="2025-09-13T00:10:01.194060066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:01.237699 systemd[1]: Started cri-containerd-b7ec3bb49eaadac58c286b9a7229c0d2446741d8cb0d76b2651a4c6ea3ef5172.scope - libcontainer container b7ec3bb49eaadac58c286b9a7229c0d2446741d8cb0d76b2651a4c6ea3ef5172. Sep 13 00:10:01.249479 systemd[1]: Started cri-containerd-3e75a1e4fce0fba82f99797c8a97ce7cb7ba3c43c7540f082d4f5c3dde705ccc.scope - libcontainer container 3e75a1e4fce0fba82f99797c8a97ce7cb7ba3c43c7540f082d4f5c3dde705ccc. Sep 13 00:10:01.254270 systemd[1]: Started cri-containerd-d4d5504f4f7b5068c3755e94876c409b1510034bcaa5be15f30fd52b769ddffd.scope - libcontainer container d4d5504f4f7b5068c3755e94876c409b1510034bcaa5be15f30fd52b769ddffd. Sep 13 00:10:01.254914 kubelet[2198]: W0913 00:10:01.252875 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Sep 13 00:10:01.254914 kubelet[2198]: E0913 00:10:01.253388 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:01.293348 kubelet[2198]: W0913 00:10:01.293194 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c&limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Sep 13 00:10:01.293831 kubelet[2198]: E0913 00:10:01.293578 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c&limit=500&resourceVersion=0\": dial tcp 10.128.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:01.339453 containerd[1464]: time="2025-09-13T00:10:01.339375571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,Uid:2afc54482e52fb9b5e8cf3c4f5a9d99f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"3e75a1e4fce0fba82f99797c8a97ce7cb7ba3c43c7540f082d4f5c3dde705ccc\"" Sep 13 00:10:01.343725 kubelet[2198]: E0913 00:10:01.342980 2198 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c" Sep 13 00:10:01.345114 containerd[1464]: time="2025-09-13T00:10:01.345068940Z" level=info msg="CreateContainer within sandbox \"3e75a1e4fce0fba82f99797c8a97ce7cb7ba3c43c7540f082d4f5c3dde705ccc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:10:01.366315 containerd[1464]: time="2025-09-13T00:10:01.366253037Z" level=info msg="CreateContainer within sandbox \"3e75a1e4fce0fba82f99797c8a97ce7cb7ba3c43c7540f082d4f5c3dde705ccc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"459e37113ef00194588ab9dee9182b73d7a52b231d721e5291624d834098a303\"" Sep 13 00:10:01.370437 containerd[1464]: time="2025-09-13T00:10:01.370357246Z" level=info msg="StartContainer for \"459e37113ef00194588ab9dee9182b73d7a52b231d721e5291624d834098a303\"" Sep 13 00:10:01.390845 containerd[1464]: time="2025-09-13T00:10:01.390607276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,Uid:2d6e4b4d7a274760de70b02f5d807a86,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7ec3bb49eaadac58c286b9a7229c0d2446741d8cb0d76b2651a4c6ea3ef5172\"" Sep 13 00:10:01.392858 kubelet[2198]: E0913 00:10:01.392664 2198 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c" Sep 13 00:10:01.395904 containerd[1464]: time="2025-09-13T00:10:01.395754002Z" level=info msg="CreateContainer within sandbox \"b7ec3bb49eaadac58c286b9a7229c0d2446741d8cb0d76b2651a4c6ea3ef5172\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:10:01.404360 containerd[1464]: time="2025-09-13T00:10:01.403506898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c,Uid:b6b28e2eb03f4cc4168ca073ba1b2cd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4d5504f4f7b5068c3755e94876c409b1510034bcaa5be15f30fd52b769ddffd\"" Sep 13 00:10:01.406600 kubelet[2198]: E0913 00:10:01.406558 2198 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6" Sep 13 00:10:01.409087 containerd[1464]: time="2025-09-13T00:10:01.408954630Z" level=info msg="CreateContainer within sandbox \"d4d5504f4f7b5068c3755e94876c409b1510034bcaa5be15f30fd52b769ddffd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:10:01.427836 containerd[1464]: time="2025-09-13T00:10:01.427611053Z" level=info msg="CreateContainer within sandbox \"b7ec3bb49eaadac58c286b9a7229c0d2446741d8cb0d76b2651a4c6ea3ef5172\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2f36e2adc4b06b213e41460b577500568a56351d63e97192cbdf350c896742aa\"" Sep 13 00:10:01.430259 containerd[1464]: time="2025-09-13T00:10:01.428689997Z" level=info msg="StartContainer for 
\"2f36e2adc4b06b213e41460b577500568a56351d63e97192cbdf350c896742aa\"" Sep 13 00:10:01.444559 containerd[1464]: time="2025-09-13T00:10:01.443218179Z" level=info msg="CreateContainer within sandbox \"d4d5504f4f7b5068c3755e94876c409b1510034bcaa5be15f30fd52b769ddffd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82ecbe6965c50b841d00121167dc1f4d2c6fcc8de8657a0c2a464eb21ea595d0\"" Sep 13 00:10:01.444891 systemd[1]: Started cri-containerd-459e37113ef00194588ab9dee9182b73d7a52b231d721e5291624d834098a303.scope - libcontainer container 459e37113ef00194588ab9dee9182b73d7a52b231d721e5291624d834098a303. Sep 13 00:10:01.447680 containerd[1464]: time="2025-09-13T00:10:01.447548725Z" level=info msg="StartContainer for \"82ecbe6965c50b841d00121167dc1f4d2c6fcc8de8657a0c2a464eb21ea595d0\"" Sep 13 00:10:01.502073 kubelet[2198]: E0913 00:10:01.500769 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="1.6s" Sep 13 00:10:01.512472 systemd[1]: Started cri-containerd-2f36e2adc4b06b213e41460b577500568a56351d63e97192cbdf350c896742aa.scope - libcontainer container 2f36e2adc4b06b213e41460b577500568a56351d63e97192cbdf350c896742aa. Sep 13 00:10:01.529214 systemd[1]: Started cri-containerd-82ecbe6965c50b841d00121167dc1f4d2c6fcc8de8657a0c2a464eb21ea595d0.scope - libcontainer container 82ecbe6965c50b841d00121167dc1f4d2c6fcc8de8657a0c2a464eb21ea595d0. Sep 13 00:10:01.575997 containerd[1464]: time="2025-09-13T00:10:01.575709108Z" level=info msg="StartContainer for \"459e37113ef00194588ab9dee9182b73d7a52b231d721e5291624d834098a303\" returns successfully" Sep 13 00:10:01.623665 systemd[1]: Started sshd@7-10.128.0.50:22-80.94.95.115:33620.service - OpenSSH per-connection server daemon (80.94.95.115:33620). 
Sep 13 00:10:01.665782 containerd[1464]: time="2025-09-13T00:10:01.665549449Z" level=info msg="StartContainer for \"2f36e2adc4b06b213e41460b577500568a56351d63e97192cbdf350c896742aa\" returns successfully"
Sep 13 00:10:01.678648 containerd[1464]: time="2025-09-13T00:10:01.678150822Z" level=info msg="StartContainer for \"82ecbe6965c50b841d00121167dc1f4d2c6fcc8de8657a0c2a464eb21ea595d0\" returns successfully"
Sep 13 00:10:01.714572 kubelet[2198]: I0913 00:10:01.714344 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:01.715474 kubelet[2198]: E0913 00:10:01.715375 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:02.146872 kubelet[2198]: E0913 00:10:02.146637 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:02.147260 kubelet[2198]: E0913 00:10:02.147127 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:02.154880 kubelet[2198]: E0913 00:10:02.154837 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:03.005603 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
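Both the lease controller (interval="1.6s") and the node registration above are retrying against a TCP port that still refuses connections while the kube-apiserver container starts. A hedged sketch of that retry shape, assuming a simple doubling backoff for illustration (client-go's real backoff parameters differ):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probe stands in for the kubelet's attempts to reach the API server;
    // while kube-apiserver is still coming up, the dial fails with
    // "connect: connection refused", exactly as in the log above.
    func probe(endpoint string) error {
    	conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	interval := 200 * time.Millisecond
    	for {
    		if err := probe("10.128.0.50:6443"); err == nil {
    			fmt.Println("API server reachable; registration can proceed")
    			return
    		} else {
    			fmt.Printf("will retry: %v (interval=%s)\n", err, interval)
    		}
    		time.Sleep(interval)
    		if interval < 1600*time.Millisecond {
    			interval *= 2 // 0.2s -> 0.4s -> 0.8s -> 1.6s, the interval logged above
    		}
    	}
    }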
Sep 13 00:10:03.159120 kubelet[2198]: E0913 00:10:03.159068 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:03.161256 kubelet[2198]: E0913 00:10:03.160527 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:03.321955 kubelet[2198]: I0913 00:10:03.321805 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:03.717347 kubelet[2198]: E0913 00:10:03.716027 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:05.017754 kubelet[2198]: E0913 00:10:05.017692 2198 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:05.063660 kubelet[2198]: I0913 00:10:05.063608 2198 apiserver.go:52] "Watching apiserver"
Sep 13 00:10:05.100201 kubelet[2198]: I0913 00:10:05.100149 2198 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:10:05.106367 kubelet[2198]: I0913 00:10:05.106316 2198 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:05.106559 kubelet[2198]: E0913 00:10:05.106395 2198 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\": node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found"
Sep 13 00:10:05.195479 kubelet[2198]: I0913 00:10:05.195426 2198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:05.216492 kubelet[2198]: E0913 00:10:05.216427 2198 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:05.216492 kubelet[2198]: I0913 00:10:05.216488 2198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:05.219482 kubelet[2198]: E0913 00:10:05.219436 2198 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:05.219482 kubelet[2198]: I0913 00:10:05.219480 2198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:05.224134 kubelet[2198]: E0913 00:10:05.224047 2198 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:05.793061 sshd[2447]: Connection closed by authenticating user root 80.94.95.115 port 33620 [preauth]
Sep 13 00:10:05.798487 systemd[1]: sshd@7-10.128.0.50:22-80.94.95.115:33620.service: Deactivated successfully.
Sep 13 00:10:07.253659 systemd[1]: Reloading requested from client PID 2484 ('systemctl') (unit session-7.scope)...
Sep 13 00:10:07.253681 systemd[1]: Reloading...
Sep 13 00:10:07.396306 zram_generator::config[2527]: No configuration found.
Sep 13 00:10:07.538834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:10:07.667023 systemd[1]: Reloading finished in 412 ms.
Sep 13 00:10:07.723127 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:10:07.734444 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:10:07.734840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:10:07.734996 systemd[1]: kubelet.service: Consumed 1.363s CPU time, 129.0M memory peak, 0B memory swap peak.
Sep 13 00:10:07.741804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:10:08.103304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:10:08.118904 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:10:08.192018 kubelet[2572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:10:08.192018 kubelet[2572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:10:08.192018 kubelet[2572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:10:08.192637 kubelet[2572]: I0913 00:10:08.192114 2572 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:10:08.200004 kubelet[2572]: I0913 00:10:08.199947 2572 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 13 00:10:08.200004 kubelet[2572]: I0913 00:10:08.199982 2572 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:10:08.201279 kubelet[2572]: I0913 00:10:08.200689 2572 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 13 00:10:08.204808 kubelet[2572]: I0913 00:10:08.204775 2572 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
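The certificate_store.go:130 line shows the restarted kubelet loading its rotated client credential from kubelet-client-current.pem, a single PEM file that concatenates the certificate and its private key. A small sketch of reading such a combined file with the standard library (the path is taken from the log; everything else is illustrative):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    )

    func main() {
    	// The same file is passed for both halves because the rotated
    	// credential stores the client cert and its key together.
    	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
    	cert, err := tls.LoadX509KeyPair(pem, pem)
    	if err != nil {
    		log.Fatalf("loading kubelet client credential: %v", err)
    	}
    	fmt.Printf("loaded chain with %d certificate(s)\n", len(cert.Certificate))
    }

tls.LoadX509KeyPair scans every PEM block in each file, so a combined cert+key file works when given as both arguments.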
Sep 13 00:10:08.208478 kubelet[2572]: I0913 00:10:08.208442 2572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:10:08.213249 kubelet[2572]: E0913 00:10:08.213170 2572 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:10:08.213249 kubelet[2572]: I0913 00:10:08.213223 2572 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:10:08.217658 kubelet[2572]: I0913 00:10:08.217588 2572 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:10:08.218018 kubelet[2572]: I0913 00:10:08.217957 2572 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:10:08.218299 kubelet[2572]: I0913 00:10:08.218003 2572 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:10:08.218491 kubelet[2572]: I0913 00:10:08.218301 2572 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:10:08.218491 kubelet[2572]: I0913 00:10:08.218322 2572 container_manager_linux.go:304] "Creating device plugin manager"
Sep 13 00:10:08.218491 kubelet[2572]: I0913 00:10:08.218397 2572 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:10:08.218651 kubelet[2572]: I0913 00:10:08.218636 2572 kubelet.go:446] "Attempting to sync node with API server"
Sep 13 00:10:08.218707 kubelet[2572]: I0913 00:10:08.218682 2572 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:10:08.218757 kubelet[2572]: I0913 00:10:08.218710 2572 kubelet.go:352] "Adding apiserver pod source"
Sep 13 00:10:08.218757 kubelet[2572]: I0913 00:10:08.218727 2572 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:10:08.223063 kubelet[2572]: I0913 00:10:08.222418 2572 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:10:08.223440 kubelet[2572]: I0913 00:10:08.223149 2572 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:10:08.223914 kubelet[2572]: I0913 00:10:08.223818 2572 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:10:08.223914 kubelet[2572]: I0913 00:10:08.223860 2572 server.go:1287] "Started kubelet"
Sep 13 00:10:08.235131 kubelet[2572]: I0913 00:10:08.234892 2572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:10:08.257586 kubelet[2572]: I0913 00:10:08.256847 2572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:10:08.260815 kubelet[2572]: I0913 00:10:08.260767 2572 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:10:08.260975 kubelet[2572]: I0913 00:10:08.260471 2572 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:10:08.271400 kubelet[2572]: I0913 00:10:08.270021 2572 server.go:479] "Adding debug handlers to kubelet server"
Sep 13 00:10:08.273585 kubelet[2572]: I0913 00:10:08.273543 2572 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:10:08.278926 kubelet[2572]: I0913 00:10:08.277330 2572 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:10:08.278926 kubelet[2572]: E0913 00:10:08.277755 2572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" not found"
Sep 13 00:10:08.284813 kubelet[2572]: I0913 00:10:08.283925 2572 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:10:08.284813 kubelet[2572]: I0913 00:10:08.284110 2572 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:10:08.287413 kubelet[2572]: I0913 00:10:08.287219 2572 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:10:08.287859 kubelet[2572]: I0913 00:10:08.287386 2572 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:10:08.300889 kubelet[2572]: I0913 00:10:08.300848 2572 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:10:08.301162 kubelet[2572]: E0913 00:10:08.300575 2572 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:10:08.309920 kubelet[2572]: I0913 00:10:08.308309 2572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:10:08.329011 kubelet[2572]: I0913 00:10:08.328498 2572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:10:08.329011 kubelet[2572]: I0913 00:10:08.328545 2572 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 13 00:10:08.329011 kubelet[2572]: I0913 00:10:08.328580 2572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
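The nodeConfig dump above carries the hard eviction thresholds the eviction manager will enforce, e.g. memory.available < 100Mi and nodefs.available < 10%. A sketch of how one such LessThan threshold is evaluated, with simplified types (the real eviction manager works with resource.Quantity values and per-signal observations):

    package main

    import "fmt"

    // threshold mirrors one entry of HardEvictionThresholds above: either an
    // absolute quantity (bytes) or a percentage of capacity, compared with
    // the LessThan operator against the observed available amount.
    type threshold struct {
    	signal   string
    	quantity int64   // absolute limit in bytes, if non-zero
    	percent  float64 // otherwise a fraction of capacity
    }

    func crossed(t threshold, available, capacity int64) bool {
    	limit := t.quantity
    	if limit == 0 {
    		limit = int64(t.percent * float64(capacity))
    	}
    	return available < limit
    }

    func main() {
    	mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
    	fmt.Println(crossed(mem, 64<<20, 8<<30)) // true: only 64Mi free
    	nodefs := threshold{signal: "nodefs.available", percent: 0.10}
    	fmt.Println(crossed(nodefs, 20<<30, 100<<30)) // false: 20% still free
    }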
Sep 13 00:10:08.329011 kubelet[2572]: I0913 00:10:08.328592 2572 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 13 00:10:08.329011 kubelet[2572]: E0913 00:10:08.328663 2572 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:10:08.431872 kubelet[2572]: E0913 00:10:08.430103 2572 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 00:10:08.446750 kubelet[2572]: I0913 00:10:08.446574 2572 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:10:08.446750 kubelet[2572]: I0913 00:10:08.446599 2572 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:10:08.446750 kubelet[2572]: I0913 00:10:08.446629 2572 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:10:08.449819 kubelet[2572]: I0913 00:10:08.447974 2572 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:10:08.449819 kubelet[2572]: I0913 00:10:08.447996 2572 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:10:08.449819 kubelet[2572]: I0913 00:10:08.448028 2572 policy_none.go:49] "None policy: Start"
Sep 13 00:10:08.449819 kubelet[2572]: I0913 00:10:08.448046 2572 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:10:08.449819 kubelet[2572]: I0913 00:10:08.448064 2572 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:10:08.449819 kubelet[2572]: I0913 00:10:08.448283 2572 state_mem.go:75] "Updated machine memory state"
Sep 13 00:10:08.466699 kubelet[2572]: I0913 00:10:08.466415 2572 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:10:08.467952 kubelet[2572]: I0913 00:10:08.467847 2572 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:10:08.469240 kubelet[2572]: I0913 00:10:08.467873 2572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:10:08.470764 kubelet[2572]: I0913 00:10:08.470597 2572 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:10:08.476258 kubelet[2572]: E0913 00:10:08.476151 2572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:10:08.602807 kubelet[2572]: I0913 00:10:08.601776 2572 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.612132 kubelet[2572]: I0913 00:10:08.611813 2572 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.612132 kubelet[2572]: I0913 00:10:08.611922 2572 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.634038 kubelet[2572]: I0913 00:10:08.633433 2572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.638197 kubelet[2572]: I0913 00:10:08.637388 2572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.638197 kubelet[2572]: I0913 00:10:08.637865 2572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.648337 kubelet[2572]: W0913 00:10:08.648218 2572 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Sep 13 00:10:08.651568 kubelet[2572]: W0913 00:10:08.651527 2572 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Sep 13 00:10:08.654157 kubelet[2572]: W0913 00:10:08.654117 2572 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Sep 13 00:10:08.691687 kubelet[2572]: I0913 00:10:08.691090 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d6e4b4d7a274760de70b02f5d807a86-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"2d6e4b4d7a274760de70b02f5d807a86\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.691687 kubelet[2572]: I0913 00:10:08.691156 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d6e4b4d7a274760de70b02f5d807a86-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"2d6e4b4d7a274760de70b02f5d807a86\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.691687 kubelet[2572]: I0913 00:10:08.691201 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d6e4b4d7a274760de70b02f5d807a86-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"2d6e4b4d7a274760de70b02f5d807a86\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.691687 kubelet[2572]: I0913 00:10:08.691258 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.692016 kubelet[2572]: I0913 00:10:08.691291 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.692016 kubelet[2572]: I0913 00:10:08.691322 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.692016 kubelet[2572]: I0913 00:10:08.691353 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.692016 kubelet[2572]: I0913 00:10:08.691384 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b6b28e2eb03f4cc4168ca073ba1b2cd8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"b6b28e2eb03f4cc4168ca073ba1b2cd8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:08.692252 kubelet[2572]: I0913 00:10:08.691430 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2afc54482e52fb9b5e8cf3c4f5a9d99f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" (UID: \"2afc54482e52fb9b5e8cf3c4f5a9d99f\") " pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:09.231149 kubelet[2572]: I0913 00:10:09.231075 2572 apiserver.go:52] "Watching apiserver"
Sep 13 00:10:09.284804 kubelet[2572]: I0913 00:10:09.284732 2572 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:10:09.341367 kubelet[2572]: I0913 00:10:09.340773 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" podStartSLOduration=1.340746561 podStartE2EDuration="1.340746561s" podCreationTimestamp="2025-09-13 00:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:09.322640418 +0000 UTC m=+1.196657381" watchObservedRunningTime="2025-09-13 00:10:09.340746561 +0000 UTC m=+1.214763520"
Sep 13 00:10:09.352969 kubelet[2572]: I0913 00:10:09.352908 2572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:09.363787 kubelet[2572]: W0913 00:10:09.363355 2572 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Sep 13 00:10:09.363787 kubelet[2572]: E0913 00:10:09.363445 2572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c"
Sep 13 00:10:09.365545 kubelet[2572]: I0913 00:10:09.365323 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" podStartSLOduration=1.36529999 podStartE2EDuration="1.36529999s" podCreationTimestamp="2025-09-13 00:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:09.341154077 +0000 UTC m=+1.215171042" watchObservedRunningTime="2025-09-13 00:10:09.36529999 +0000 UTC m=+1.239316953"
Sep 13 00:10:09.382074 kubelet[2572]: I0913 00:10:09.381686 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" podStartSLOduration=1.381643166 podStartE2EDuration="1.381643166s" podCreationTimestamp="2025-09-13 00:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:09.366156137 +0000 UTC m=+1.240173089" watchObservedRunningTime="2025-09-13 00:10:09.381643166 +0000 UTC m=+1.255660130"
Sep 13 00:10:13.067578 kubelet[2572]: I0913 00:10:13.067534 2572 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:10:13.069104 containerd[1464]: time="2025-09-13T00:10:13.068555633Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:10:13.069653 kubelet[2572]: I0913 00:10:13.069121 2572 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:10:13.737814 systemd[1]: Created slice kubepods-besteffort-podefb9b94a_0052_4c7f_b9cd_ad7d71a714cb.slice - libcontainer container kubepods-besteffort-podefb9b94a_0052_4c7f_b9cd_ad7d71a714cb.slice.
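The "Created slice kubepods-besteffort-pod...slice" line above reflects the systemd cgroup driver's naming scheme (CgroupDriver:"systemd" in the nodeConfig earlier): the pod's UID has its dashes replaced with underscores and the result is nested under the QoS-class prefix. A simplified sketch of that naming, omitting the escaping of the full parent-slice chain:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSlice builds the leaf slice name for a pod, as seen in the systemd
    // log above (simplified; the real driver also escapes the parent chain).
    func podSlice(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice",
    		qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	// Prints kubepods-besteffort-podefb9b94a_0052_4c7f_b9cd_ad7d71a714cb.slice,
    	// matching the kube-proxy pod's slice created above.
    	fmt.Println(podSlice("besteffort", "efb9b94a-0052-4c7f-b9cd-ad7d71a714cb"))
    }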
Sep 13 00:10:13.821605 kubelet[2572]: I0913 00:10:13.821542 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/efb9b94a-0052-4c7f-b9cd-ad7d71a714cb-kube-proxy\") pod \"kube-proxy-2kklz\" (UID: \"efb9b94a-0052-4c7f-b9cd-ad7d71a714cb\") " pod="kube-system/kube-proxy-2kklz"
Sep 13 00:10:13.821605 kubelet[2572]: I0913 00:10:13.821595 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efb9b94a-0052-4c7f-b9cd-ad7d71a714cb-lib-modules\") pod \"kube-proxy-2kklz\" (UID: \"efb9b94a-0052-4c7f-b9cd-ad7d71a714cb\") " pod="kube-system/kube-proxy-2kklz"
Sep 13 00:10:13.821605 kubelet[2572]: I0913 00:10:13.821628 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfr6l\" (UniqueName: \"kubernetes.io/projected/efb9b94a-0052-4c7f-b9cd-ad7d71a714cb-kube-api-access-kfr6l\") pod \"kube-proxy-2kklz\" (UID: \"efb9b94a-0052-4c7f-b9cd-ad7d71a714cb\") " pod="kube-system/kube-proxy-2kklz"
Sep 13 00:10:13.821877 kubelet[2572]: I0913 00:10:13.821658 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efb9b94a-0052-4c7f-b9cd-ad7d71a714cb-xtables-lock\") pod \"kube-proxy-2kklz\" (UID: \"efb9b94a-0052-4c7f-b9cd-ad7d71a714cb\") " pod="kube-system/kube-proxy-2kklz"
Sep 13 00:10:14.049523 containerd[1464]: time="2025-09-13T00:10:14.048922789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kklz,Uid:efb9b94a-0052-4c7f-b9cd-ad7d71a714cb,Namespace:kube-system,Attempt:0,}"
Sep 13 00:10:14.087660 containerd[1464]: time="2025-09-13T00:10:14.087172338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:10:14.087660 containerd[1464]: time="2025-09-13T00:10:14.087302646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:10:14.087660 containerd[1464]: time="2025-09-13T00:10:14.087330553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:10:14.087660 containerd[1464]: time="2025-09-13T00:10:14.087481011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:10:14.135528 systemd[1]: Started cri-containerd-5e71c2a03601def91c5ec9b6151bb3954c5f51d6167c1a0dc5c0dd31075439c5.scope - libcontainer container 5e71c2a03601def91c5ec9b6151bb3954c5f51d6167c1a0dc5c0dd31075439c5.
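Volume names like \"kube-api-access-kfr6l\" above carry the five-character random suffix Kubernetes appends to generated names, drawn from a reduced alphabet with vowels and easily-confused characters removed. A sketch of that generation (assumption: simplified, using math/rand in place of the apimachinery helper):

    package main

    import (
    	"fmt"
    	"math/rand"
    )

    // alphanums approximates the reduced alphabet Kubernetes uses for
    // generated name suffixes; vowels and look-alikes (0/O, 1/l/I) are
    // excluded so the suffix stays unambiguous.
    const alphanums = "bcdfghjklmnpqrstvwxz2456789"

    func randomSuffix(n int) string {
    	b := make([]byte, n)
    	for i := range b {
    		b[i] = alphanums[rand.Intn(len(alphanums))]
    	}
    	return string(b)
    }

    func main() {
    	// Produces names shaped like the kube-api-access-kfr6l volume above.
    	fmt.Println("kube-api-access-" + randomSuffix(5))
    }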
Sep 13 00:10:14.191838 containerd[1464]: time="2025-09-13T00:10:14.191761769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kklz,Uid:efb9b94a-0052-4c7f-b9cd-ad7d71a714cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e71c2a03601def91c5ec9b6151bb3954c5f51d6167c1a0dc5c0dd31075439c5\""
Sep 13 00:10:14.198385 containerd[1464]: time="2025-09-13T00:10:14.198163241Z" level=info msg="CreateContainer within sandbox \"5e71c2a03601def91c5ec9b6151bb3954c5f51d6167c1a0dc5c0dd31075439c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:10:14.226352 containerd[1464]: time="2025-09-13T00:10:14.226095268Z" level=info msg="CreateContainer within sandbox \"5e71c2a03601def91c5ec9b6151bb3954c5f51d6167c1a0dc5c0dd31075439c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f798efb33e1167a21af8d0c66228281e9312634f103584adce9c4b67dd466bfe\""
Sep 13 00:10:14.228353 containerd[1464]: time="2025-09-13T00:10:14.227528048Z" level=info msg="StartContainer for \"f798efb33e1167a21af8d0c66228281e9312634f103584adce9c4b67dd466bfe\""
Sep 13 00:10:14.245689 systemd[1]: Created slice kubepods-besteffort-podb3adc428_4d3a_4fa3_945b_de0433a714fa.slice - libcontainer container kubepods-besteffort-podb3adc428_4d3a_4fa3_945b_de0433a714fa.slice.
Sep 13 00:10:14.320578 systemd[1]: Started cri-containerd-f798efb33e1167a21af8d0c66228281e9312634f103584adce9c4b67dd466bfe.scope - libcontainer container f798efb33e1167a21af8d0c66228281e9312634f103584adce9c4b67dd466bfe.
Sep 13 00:10:14.325863 kubelet[2572]: I0913 00:10:14.325759 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b3adc428-4d3a-4fa3-945b-de0433a714fa-var-lib-calico\") pod \"tigera-operator-755d956888-kqmw4\" (UID: \"b3adc428-4d3a-4fa3-945b-de0433a714fa\") " pod="tigera-operator/tigera-operator-755d956888-kqmw4"
Sep 13 00:10:14.325863 kubelet[2572]: I0913 00:10:14.325827 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr874\" (UniqueName: \"kubernetes.io/projected/b3adc428-4d3a-4fa3-945b-de0433a714fa-kube-api-access-kr874\") pod \"tigera-operator-755d956888-kqmw4\" (UID: \"b3adc428-4d3a-4fa3-945b-de0433a714fa\") " pod="tigera-operator/tigera-operator-755d956888-kqmw4"
Sep 13 00:10:14.370948 containerd[1464]: time="2025-09-13T00:10:14.370878316Z" level=info msg="StartContainer for \"f798efb33e1167a21af8d0c66228281e9312634f103584adce9c4b67dd466bfe\" returns successfully"
Sep 13 00:10:14.553788 containerd[1464]: time="2025-09-13T00:10:14.553298222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-kqmw4,Uid:b3adc428-4d3a-4fa3-945b-de0433a714fa,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:10:14.600081 containerd[1464]: time="2025-09-13T00:10:14.599661755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:10:14.600081 containerd[1464]: time="2025-09-13T00:10:14.599740434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:10:14.600081 containerd[1464]: time="2025-09-13T00:10:14.599766028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:10:14.600081 containerd[1464]: time="2025-09-13T00:10:14.599905884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:10:14.629541 systemd[1]: Started cri-containerd-ce09b8769840780649cf6bf7cb86af927ebca3a5357384293cdd50944ebeba96.scope - libcontainer container ce09b8769840780649cf6bf7cb86af927ebca3a5357384293cdd50944ebeba96.
Sep 13 00:10:14.700976 containerd[1464]: time="2025-09-13T00:10:14.700166280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-kqmw4,Uid:b3adc428-4d3a-4fa3-945b-de0433a714fa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ce09b8769840780649cf6bf7cb86af927ebca3a5357384293cdd50944ebeba96\""
Sep 13 00:10:14.705449 containerd[1464]: time="2025-09-13T00:10:14.705093582Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:10:15.725835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571595673.mount: Deactivated successfully.
Sep 13 00:10:16.975982 containerd[1464]: time="2025-09-13T00:10:16.975903558Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:16.977719 containerd[1464]: time="2025-09-13T00:10:16.977469885Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 00:10:16.980974 containerd[1464]: time="2025-09-13T00:10:16.979120182Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:16.983841 containerd[1464]: time="2025-09-13T00:10:16.982432091Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:16.983841 containerd[1464]: time="2025-09-13T00:10:16.983683168Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.278536521s"
Sep 13 00:10:16.983841 containerd[1464]: time="2025-09-13T00:10:16.983729301Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 00:10:16.987654 containerd[1464]: time="2025-09-13T00:10:16.987605048Z" level=info msg="CreateContainer within sandbox \"ce09b8769840780649cf6bf7cb86af927ebca3a5357384293cdd50944ebeba96\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:10:17.016254 containerd[1464]: time="2025-09-13T00:10:17.013593171Z" level=info msg="CreateContainer within sandbox \"ce09b8769840780649cf6bf7cb86af927ebca3a5357384293cdd50944ebeba96\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b14b4d35e737b9f1c8743965ad08dc524f4824313d230d33172440e7564883ba\""
Sep 13 00:10:17.016254 containerd[1464]: time="2025-09-13T00:10:17.014550705Z" level=info msg="StartContainer for \"b14b4d35e737b9f1c8743965ad08dc524f4824313d230d33172440e7564883ba\""
Sep 13 00:10:17.066484 systemd[1]: Started cri-containerd-b14b4d35e737b9f1c8743965ad08dc524f4824313d230d33172440e7564883ba.scope - libcontainer container b14b4d35e737b9f1c8743965ad08dc524f4824313d230d33172440e7564883ba.
Sep 13 00:10:17.110032 containerd[1464]: time="2025-09-13T00:10:17.109970990Z" level=info msg="StartContainer for \"b14b4d35e737b9f1c8743965ad08dc524f4824313d230d33172440e7564883ba\" returns successfully"
Sep 13 00:10:17.145338 update_engine[1444]: I20250913 00:10:17.144308 1444 update_attempter.cc:509] Updating boot flags...
Sep 13 00:10:17.225293 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2921)
Sep 13 00:10:17.352519 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2731)
Sep 13 00:10:17.423395 kubelet[2572]: I0913 00:10:17.423315 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2kklz" podStartSLOduration=4.423282131 podStartE2EDuration="4.423282131s" podCreationTimestamp="2025-09-13 00:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:15.39230899 +0000 UTC m=+7.266325952" watchObservedRunningTime="2025-09-13 00:10:17.423282131 +0000 UTC m=+9.297299095"
Sep 13 00:10:17.816173 kubelet[2572]: I0913 00:10:17.816069 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-kqmw4" podStartSLOduration=1.535189138 podStartE2EDuration="3.816038408s" podCreationTimestamp="2025-09-13 00:10:14 +0000 UTC" firstStartedPulling="2025-09-13 00:10:14.704268 +0000 UTC m=+6.578284946" lastFinishedPulling="2025-09-13 00:10:16.985117278 +0000 UTC m=+8.859134216" observedRunningTime="2025-09-13 00:10:17.423753919 +0000 UTC m=+9.297770904" watchObservedRunningTime="2025-09-13 00:10:17.816038408 +0000 UTC m=+9.690055371"
Sep 13 00:10:22.655688 sudo[1708]: pam_unix(sudo:session): session closed for user root
Sep 13 00:10:22.716723 sshd[1705]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:22.725760 systemd[1]: sshd@6-10.128.0.50:22-147.75.109.163:55532.service: Deactivated successfully.
Sep 13 00:10:22.729091 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:10:22.730376 systemd[1]: session-7.scope: Consumed 6.880s CPU time, 155.5M memory peak, 0B memory swap peak.
Sep 13 00:10:22.734659 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:10:22.737768 systemd-logind[1439]: Removed session 7.
Sep 13 00:10:28.782933 systemd[1]: Created slice kubepods-besteffort-podd5846ec2_cfc5_4141_9a08_cff12b660b30.slice - libcontainer container kubepods-besteffort-podd5846ec2_cfc5_4141_9a08_cff12b660b30.slice.
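The pod_startup_latency_tracker.go:104 entries above report two durations: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window. Reproducing the tigera-operator numbers (the log's pull timestamps are slightly truncated, so the SLO value matches only to within a few nanoseconds):

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	t, err := time.Parse(time.RFC3339Nano, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-09-13T00:10:14Z")
    	startedPulling := mustParse("2025-09-13T00:10:14.704268Z")
    	finishedPulling := mustParse("2025-09-13T00:10:16.985117278Z")
    	observed := mustParse("2025-09-13T00:10:17.816038408Z")

    	e2e := observed.Sub(created)
    	slo := e2e - finishedPulling.Sub(startedPulling)
    	fmt.Println(e2e) // 3.816038408s, the podStartE2EDuration above
    	fmt.Println(slo) // ~1.535189s, the podStartSLOduration above
    }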
Sep 13 00:10:28.832047 kubelet[2572]: I0913 00:10:28.831977 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5846ec2-cfc5-4141-9a08-cff12b660b30-tigera-ca-bundle\") pod \"calico-typha-54995c46f8-n8f8p\" (UID: \"d5846ec2-cfc5-4141-9a08-cff12b660b30\") " pod="calico-system/calico-typha-54995c46f8-n8f8p"
Sep 13 00:10:28.832047 kubelet[2572]: I0913 00:10:28.832056 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d5846ec2-cfc5-4141-9a08-cff12b660b30-typha-certs\") pod \"calico-typha-54995c46f8-n8f8p\" (UID: \"d5846ec2-cfc5-4141-9a08-cff12b660b30\") " pod="calico-system/calico-typha-54995c46f8-n8f8p"
Sep 13 00:10:28.832863 kubelet[2572]: I0913 00:10:28.832085 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rldkt\" (UniqueName: \"kubernetes.io/projected/d5846ec2-cfc5-4141-9a08-cff12b660b30-kube-api-access-rldkt\") pod \"calico-typha-54995c46f8-n8f8p\" (UID: \"d5846ec2-cfc5-4141-9a08-cff12b660b30\") " pod="calico-system/calico-typha-54995c46f8-n8f8p"
Sep 13 00:10:29.090728 containerd[1464]: time="2025-09-13T00:10:29.090196676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54995c46f8-n8f8p,Uid:d5846ec2-cfc5-4141-9a08-cff12b660b30,Namespace:calico-system,Attempt:0,}"
Sep 13 00:10:29.153925 systemd[1]: Created slice kubepods-besteffort-pod40c3fd1d_de27_482b_ad30_da741a908c6f.slice - libcontainer container kubepods-besteffort-pod40c3fd1d_de27_482b_ad30_da741a908c6f.slice.
Sep 13 00:10:29.169077 containerd[1464]: time="2025-09-13T00:10:29.168852312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:10:29.169077 containerd[1464]: time="2025-09-13T00:10:29.168953324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:10:29.169077 containerd[1464]: time="2025-09-13T00:10:29.169016018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:10:29.169686 containerd[1464]: time="2025-09-13T00:10:29.169426834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:10:29.216625 systemd[1]: Started cri-containerd-f11dd3e6292772738744bf24e8bb36971c989b62eb4a17c1f45e39f0c2f0ff72.scope - libcontainer container f11dd3e6292772738744bf24e8bb36971c989b62eb4a17c1f45e39f0c2f0ff72.
Sep 13 00:10:29.236764 kubelet[2572]: I0913 00:10:29.236709 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/40c3fd1d-de27-482b-ad30-da741a908c6f-cni-net-dir\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.237732 kubelet[2572]: I0913 00:10:29.237407 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40c3fd1d-de27-482b-ad30-da741a908c6f-xtables-lock\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.238307 kubelet[2572]: I0913 00:10:29.237892 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40c3fd1d-de27-482b-ad30-da741a908c6f-lib-modules\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.239336 kubelet[2572]: I0913 00:10:29.238510 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/40c3fd1d-de27-482b-ad30-da741a908c6f-node-certs\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.239336 kubelet[2572]: I0913 00:10:29.239290 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40c3fd1d-de27-482b-ad30-da741a908c6f-var-lib-calico\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.241461 kubelet[2572]: I0913 00:10:29.239692 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/40c3fd1d-de27-482b-ad30-da741a908c6f-policysync\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.241461 kubelet[2572]: I0913 00:10:29.239858 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmfvx\" (UniqueName: \"kubernetes.io/projected/40c3fd1d-de27-482b-ad30-da741a908c6f-kube-api-access-bmfvx\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.241461 kubelet[2572]: I0913 00:10:29.239910 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40c3fd1d-de27-482b-ad30-da741a908c6f-tigera-ca-bundle\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.241461 kubelet[2572]: I0913 00:10:29.240127 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/40c3fd1d-de27-482b-ad30-da741a908c6f-cni-bin-dir\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.241461 kubelet[2572]: I0913 00:10:29.241217 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/40c3fd1d-de27-482b-ad30-da741a908c6f-cni-log-dir\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.241763 kubelet[2572]: I0913 00:10:29.241293 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/40c3fd1d-de27-482b-ad30-da741a908c6f-var-run-calico\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.241763 kubelet[2572]: I0913 00:10:29.241346 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/40c3fd1d-de27-482b-ad30-da741a908c6f-flexvol-driver-host\") pod \"calico-node-wgqsq\" (UID: \"40c3fd1d-de27-482b-ad30-da741a908c6f\") " pod="calico-system/calico-node-wgqsq" Sep 13 00:10:29.340962 containerd[1464]: time="2025-09-13T00:10:29.340754958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54995c46f8-n8f8p,Uid:d5846ec2-cfc5-4141-9a08-cff12b660b30,Namespace:calico-system,Attempt:0,} returns sandbox id \"f11dd3e6292772738744bf24e8bb36971c989b62eb4a17c1f45e39f0c2f0ff72\"" Sep 13 00:10:29.346436 containerd[1464]: time="2025-09-13T00:10:29.345857151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:10:29.346964 kubelet[2572]: E0913 00:10:29.346610 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.346964 kubelet[2572]: W0913 00:10:29.346647 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.346964 kubelet[2572]: E0913 00:10:29.346693 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.348140 kubelet[2572]: E0913 00:10:29.348095 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.348140 kubelet[2572]: W0913 00:10:29.348134 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.348701 kubelet[2572]: E0913 00:10:29.348166 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.349890 kubelet[2572]: E0913 00:10:29.349197 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.350069 kubelet[2572]: W0913 00:10:29.349890 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.350069 kubelet[2572]: E0913 00:10:29.349946 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.350662 kubelet[2572]: E0913 00:10:29.350638 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.350662 kubelet[2572]: W0913 00:10:29.350659 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.350819 kubelet[2572]: E0913 00:10:29.350727 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.351329 kubelet[2572]: E0913 00:10:29.351310 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.351329 kubelet[2572]: W0913 00:10:29.351329 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.351585 kubelet[2572]: E0913 00:10:29.351490 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.351944 kubelet[2572]: E0913 00:10:29.351910 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.351944 kubelet[2572]: W0913 00:10:29.351932 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.352169 kubelet[2572]: E0913 00:10:29.352125 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.352788 kubelet[2572]: E0913 00:10:29.352754 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.352788 kubelet[2572]: W0913 00:10:29.352785 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.353025 kubelet[2572]: E0913 00:10:29.352872 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.353526 kubelet[2572]: E0913 00:10:29.353491 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.353526 kubelet[2572]: W0913 00:10:29.353511 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.353740 kubelet[2572]: E0913 00:10:29.353698 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.354016 kubelet[2572]: E0913 00:10:29.353967 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.354016 kubelet[2572]: W0913 00:10:29.353982 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.354425 kubelet[2572]: E0913 00:10:29.354123 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.354425 kubelet[2572]: E0913 00:10:29.354394 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.354425 kubelet[2572]: W0913 00:10:29.354410 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.354766 kubelet[2572]: E0913 00:10:29.354458 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.354998 kubelet[2572]: E0913 00:10:29.354975 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.355088 kubelet[2572]: W0913 00:10:29.355014 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.355149 kubelet[2572]: E0913 00:10:29.355094 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.355490 kubelet[2572]: E0913 00:10:29.355470 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.355490 kubelet[2572]: W0913 00:10:29.355488 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.355651 kubelet[2572]: E0913 00:10:29.355631 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.355945 kubelet[2572]: E0913 00:10:29.355907 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.355945 kubelet[2572]: W0913 00:10:29.355928 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.356214 kubelet[2572]: E0913 00:10:29.356074 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.356336 kubelet[2572]: E0913 00:10:29.356291 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.356336 kubelet[2572]: W0913 00:10:29.356316 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.356476 kubelet[2572]: E0913 00:10:29.356455 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.356758 kubelet[2572]: E0913 00:10:29.356725 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.356758 kubelet[2572]: W0913 00:10:29.356745 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.357020 kubelet[2572]: E0913 00:10:29.356891 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.357104 kubelet[2572]: E0913 00:10:29.357091 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.357169 kubelet[2572]: W0913 00:10:29.357104 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.357280 kubelet[2572]: E0913 00:10:29.357207 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.357625 kubelet[2572]: E0913 00:10:29.357575 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.357625 kubelet[2572]: W0913 00:10:29.357592 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.357752 kubelet[2572]: E0913 00:10:29.357734 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.358107 kubelet[2572]: E0913 00:10:29.358073 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.358107 kubelet[2572]: W0913 00:10:29.358090 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.358277 kubelet[2572]: E0913 00:10:29.358214 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.359643 kubelet[2572]: E0913 00:10:29.359620 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.360176 kubelet[2572]: W0913 00:10:29.360131 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.360589 kubelet[2572]: E0913 00:10:29.360318 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.361974 kubelet[2572]: E0913 00:10:29.361951 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.362128 kubelet[2572]: W0913 00:10:29.362038 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.362861 kubelet[2572]: E0913 00:10:29.362121 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.363714 kubelet[2572]: E0913 00:10:29.363677 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.363714 kubelet[2572]: W0913 00:10:29.363697 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.363714 kubelet[2572]: E0913 00:10:29.363716 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.371777 kubelet[2572]: E0913 00:10:29.371738 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.372047 kubelet[2572]: W0913 00:10:29.371869 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.372047 kubelet[2572]: E0913 00:10:29.371897 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.382056 kubelet[2572]: E0913 00:10:29.381919 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.382056 kubelet[2572]: W0913 00:10:29.381949 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.382056 kubelet[2572]: E0913 00:10:29.381979 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.437082 kubelet[2572]: E0913 00:10:29.436684 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st7d7" podUID="dbe2014e-59d9-4e26-9bc7-323114f09c1f" Sep 13 00:10:29.462268 containerd[1464]: time="2025-09-13T00:10:29.462200217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wgqsq,Uid:40c3fd1d-de27-482b-ad30-da741a908c6f,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:29.521547 containerd[1464]: time="2025-09-13T00:10:29.519665734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:29.521547 containerd[1464]: time="2025-09-13T00:10:29.521380886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:29.521547 containerd[1464]: time="2025-09-13T00:10:29.521482714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:29.523111 containerd[1464]: time="2025-09-13T00:10:29.521809130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:29.523792 kubelet[2572]: E0913 00:10:29.523648 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.523792 kubelet[2572]: W0913 00:10:29.523682 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.523792 kubelet[2572]: E0913 00:10:29.523715 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.524987 kubelet[2572]: E0913 00:10:29.524544 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.524987 kubelet[2572]: W0913 00:10:29.524567 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.524987 kubelet[2572]: E0913 00:10:29.524589 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.526035 kubelet[2572]: E0913 00:10:29.525915 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.526035 kubelet[2572]: W0913 00:10:29.525934 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.526035 kubelet[2572]: E0913 00:10:29.525954 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.527598 kubelet[2572]: E0913 00:10:29.527220 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.527598 kubelet[2572]: W0913 00:10:29.527258 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.527598 kubelet[2572]: E0913 00:10:29.527280 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.529568 kubelet[2572]: E0913 00:10:29.529331 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.529568 kubelet[2572]: W0913 00:10:29.529362 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.529568 kubelet[2572]: E0913 00:10:29.529381 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.530079 kubelet[2572]: E0913 00:10:29.529916 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.530079 kubelet[2572]: W0913 00:10:29.529935 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.530079 kubelet[2572]: E0913 00:10:29.529954 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.531419 kubelet[2572]: E0913 00:10:29.531200 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.531419 kubelet[2572]: W0913 00:10:29.531220 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.531419 kubelet[2572]: E0913 00:10:29.531277 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.532044 kubelet[2572]: E0913 00:10:29.531897 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.532044 kubelet[2572]: W0913 00:10:29.531917 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.532044 kubelet[2572]: E0913 00:10:29.531936 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.532766 kubelet[2572]: E0913 00:10:29.532648 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.532766 kubelet[2572]: W0913 00:10:29.532667 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.532766 kubelet[2572]: E0913 00:10:29.532687 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.533528 kubelet[2572]: E0913 00:10:29.533351 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.533528 kubelet[2572]: W0913 00:10:29.533371 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.533528 kubelet[2572]: E0913 00:10:29.533389 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.534351 kubelet[2572]: E0913 00:10:29.534135 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.534351 kubelet[2572]: W0913 00:10:29.534158 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.534351 kubelet[2572]: E0913 00:10:29.534176 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.535950 kubelet[2572]: E0913 00:10:29.535932 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.536199 kubelet[2572]: W0913 00:10:29.535998 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.536199 kubelet[2572]: E0913 00:10:29.536019 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.536973 kubelet[2572]: E0913 00:10:29.536783 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.536973 kubelet[2572]: W0913 00:10:29.536802 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.536973 kubelet[2572]: E0913 00:10:29.536818 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.538176 kubelet[2572]: E0913 00:10:29.537758 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.538176 kubelet[2572]: W0913 00:10:29.537780 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.538176 kubelet[2572]: E0913 00:10:29.537798 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.539325 kubelet[2572]: E0913 00:10:29.539098 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.539325 kubelet[2572]: W0913 00:10:29.539118 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.539325 kubelet[2572]: E0913 00:10:29.539143 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.540554 kubelet[2572]: E0913 00:10:29.540168 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.540554 kubelet[2572]: W0913 00:10:29.540187 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.540554 kubelet[2572]: E0913 00:10:29.540217 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.541993 kubelet[2572]: E0913 00:10:29.541558 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.541993 kubelet[2572]: W0913 00:10:29.541578 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.541993 kubelet[2572]: E0913 00:10:29.541596 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.543372 kubelet[2572]: E0913 00:10:29.542606 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.543372 kubelet[2572]: W0913 00:10:29.542626 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.543372 kubelet[2572]: E0913 00:10:29.542646 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.544218 kubelet[2572]: E0913 00:10:29.543898 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.544218 kubelet[2572]: W0913 00:10:29.543917 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.544218 kubelet[2572]: E0913 00:10:29.543935 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.545175 kubelet[2572]: E0913 00:10:29.544704 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.545175 kubelet[2572]: W0913 00:10:29.544727 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.545175 kubelet[2572]: E0913 00:10:29.544757 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.547438 kubelet[2572]: E0913 00:10:29.546512 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.547438 kubelet[2572]: W0913 00:10:29.546656 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.547438 kubelet[2572]: E0913 00:10:29.546677 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.547438 kubelet[2572]: I0913 00:10:29.546753 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe2014e-59d9-4e26-9bc7-323114f09c1f-kubelet-dir\") pod \"csi-node-driver-st7d7\" (UID: \"dbe2014e-59d9-4e26-9bc7-323114f09c1f\") " pod="calico-system/csi-node-driver-st7d7" Sep 13 00:10:29.550071 kubelet[2572]: E0913 00:10:29.548916 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.550071 kubelet[2572]: W0913 00:10:29.548969 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.550071 kubelet[2572]: E0913 00:10:29.548991 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.551494 kubelet[2572]: E0913 00:10:29.551473 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.551716 kubelet[2572]: W0913 00:10:29.551691 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.551919 kubelet[2572]: E0913 00:10:29.551884 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.553020 kubelet[2572]: E0913 00:10:29.552985 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.553509 kubelet[2572]: W0913 00:10:29.553277 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.553509 kubelet[2572]: E0913 00:10:29.553307 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.553509 kubelet[2572]: I0913 00:10:29.553360 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dbe2014e-59d9-4e26-9bc7-323114f09c1f-varrun\") pod \"csi-node-driver-st7d7\" (UID: \"dbe2014e-59d9-4e26-9bc7-323114f09c1f\") " pod="calico-system/csi-node-driver-st7d7" Sep 13 00:10:29.558370 kubelet[2572]: E0913 00:10:29.557942 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.558370 kubelet[2572]: W0913 00:10:29.557967 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.558370 kubelet[2572]: E0913 00:10:29.557990 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.558370 kubelet[2572]: I0913 00:10:29.558026 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dbe2014e-59d9-4e26-9bc7-323114f09c1f-socket-dir\") pod \"csi-node-driver-st7d7\" (UID: \"dbe2014e-59d9-4e26-9bc7-323114f09c1f\") " pod="calico-system/csi-node-driver-st7d7" Sep 13 00:10:29.559620 kubelet[2572]: E0913 00:10:29.559096 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.559620 kubelet[2572]: W0913 00:10:29.559120 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.559620 kubelet[2572]: E0913 00:10:29.559166 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.559620 kubelet[2572]: I0913 00:10:29.559193 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5zp\" (UniqueName: \"kubernetes.io/projected/dbe2014e-59d9-4e26-9bc7-323114f09c1f-kube-api-access-qc5zp\") pod \"csi-node-driver-st7d7\" (UID: \"dbe2014e-59d9-4e26-9bc7-323114f09c1f\") " pod="calico-system/csi-node-driver-st7d7" Sep 13 00:10:29.560446 kubelet[2572]: E0913 00:10:29.560129 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.560446 kubelet[2572]: W0913 00:10:29.560151 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.560446 kubelet[2572]: E0913 00:10:29.560179 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.560508 systemd[1]: Started cri-containerd-d4161a9de7ce764fa27a90a1be993d0f7a73d23350bfd86b6cb18db38211a984.scope - libcontainer container d4161a9de7ce764fa27a90a1be993d0f7a73d23350bfd86b6cb18db38211a984. Sep 13 00:10:29.564250 kubelet[2572]: E0913 00:10:29.563004 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.564250 kubelet[2572]: W0913 00:10:29.563024 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.564250 kubelet[2572]: E0913 00:10:29.563719 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.564250 kubelet[2572]: E0913 00:10:29.564166 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.564250 kubelet[2572]: W0913 00:10:29.564183 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.565386 kubelet[2572]: E0913 00:10:29.564843 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.565386 kubelet[2572]: E0913 00:10:29.565187 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.565386 kubelet[2572]: W0913 00:10:29.565200 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.565386 kubelet[2572]: E0913 00:10:29.565267 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.565386 kubelet[2572]: I0913 00:10:29.565302 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dbe2014e-59d9-4e26-9bc7-323114f09c1f-registration-dir\") pod \"csi-node-driver-st7d7\" (UID: \"dbe2014e-59d9-4e26-9bc7-323114f09c1f\") " pod="calico-system/csi-node-driver-st7d7" Sep 13 00:10:29.566180 kubelet[2572]: E0913 00:10:29.566046 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.566180 kubelet[2572]: W0913 00:10:29.566064 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.567103 kubelet[2572]: E0913 00:10:29.566472 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.567103 kubelet[2572]: E0913 00:10:29.567042 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.567103 kubelet[2572]: W0913 00:10:29.567061 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.567103 kubelet[2572]: E0913 00:10:29.567078 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.568505 kubelet[2572]: E0913 00:10:29.568040 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.568505 kubelet[2572]: W0913 00:10:29.568059 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.568505 kubelet[2572]: E0913 00:10:29.568447 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.570620 kubelet[2572]: E0913 00:10:29.570078 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.570620 kubelet[2572]: W0913 00:10:29.570099 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.570620 kubelet[2572]: E0913 00:10:29.570117 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.571250 kubelet[2572]: E0913 00:10:29.571152 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.571250 kubelet[2572]: W0913 00:10:29.571171 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.571250 kubelet[2572]: E0913 00:10:29.571189 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.671076 containerd[1464]: time="2025-09-13T00:10:29.670217739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wgqsq,Uid:40c3fd1d-de27-482b-ad30-da741a908c6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d4161a9de7ce764fa27a90a1be993d0f7a73d23350bfd86b6cb18db38211a984\"" Sep 13 00:10:29.673778 kubelet[2572]: E0913 00:10:29.672550 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.673778 kubelet[2572]: W0913 00:10:29.672574 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.673778 kubelet[2572]: E0913 00:10:29.672604 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.673778 kubelet[2572]: E0913 00:10:29.673550 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.673778 kubelet[2572]: W0913 00:10:29.673568 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.674129 kubelet[2572]: E0913 00:10:29.673691 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.675883 kubelet[2572]: E0913 00:10:29.674449 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.675883 kubelet[2572]: W0913 00:10:29.674492 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.675883 kubelet[2572]: E0913 00:10:29.674532 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.675883 kubelet[2572]: E0913 00:10:29.675008 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.675883 kubelet[2572]: W0913 00:10:29.675052 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.675883 kubelet[2572]: E0913 00:10:29.675070 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.675883 kubelet[2572]: E0913 00:10:29.675549 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.675883 kubelet[2572]: W0913 00:10:29.675565 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.675883 kubelet[2572]: E0913 00:10:29.675584 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.676464 kubelet[2572]: E0913 00:10:29.676001 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.676464 kubelet[2572]: W0913 00:10:29.676018 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.676464 kubelet[2572]: E0913 00:10:29.676069 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.676637 kubelet[2572]: E0913 00:10:29.676588 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.676637 kubelet[2572]: W0913 00:10:29.676604 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.676637 kubelet[2572]: E0913 00:10:29.676621 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.682200 kubelet[2572]: E0913 00:10:29.676933 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.682200 kubelet[2572]: W0913 00:10:29.676952 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.682200 kubelet[2572]: E0913 00:10:29.676969 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.682200 kubelet[2572]: E0913 00:10:29.677480 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.682200 kubelet[2572]: W0913 00:10:29.677495 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.682200 kubelet[2572]: E0913 00:10:29.677512 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.682200 kubelet[2572]: E0913 00:10:29.677898 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.682200 kubelet[2572]: W0913 00:10:29.677926 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.682200 kubelet[2572]: E0913 00:10:29.677944 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.682200 kubelet[2572]: E0913 00:10:29.678389 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.683549 kubelet[2572]: W0913 00:10:29.678404 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.683549 kubelet[2572]: E0913 00:10:29.678422 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.683549 kubelet[2572]: E0913 00:10:29.678759 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.683549 kubelet[2572]: W0913 00:10:29.678802 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.683549 kubelet[2572]: E0913 00:10:29.678820 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.683549 kubelet[2572]: E0913 00:10:29.679326 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.683549 kubelet[2572]: W0913 00:10:29.679340 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.683549 kubelet[2572]: E0913 00:10:29.679356 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.683549 kubelet[2572]: E0913 00:10:29.679758 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.683549 kubelet[2572]: W0913 00:10:29.679775 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.685779 kubelet[2572]: E0913 00:10:29.679809 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.685779 kubelet[2572]: E0913 00:10:29.680358 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.685779 kubelet[2572]: W0913 00:10:29.680374 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.685779 kubelet[2572]: E0913 00:10:29.680416 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.685779 kubelet[2572]: E0913 00:10:29.680800 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.685779 kubelet[2572]: W0913 00:10:29.681030 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.685779 kubelet[2572]: E0913 00:10:29.681235 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.685779 kubelet[2572]: E0913 00:10:29.681790 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.685779 kubelet[2572]: W0913 00:10:29.681805 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.685779 kubelet[2572]: E0913 00:10:29.683151 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.690367 kubelet[2572]: W0913 00:10:29.683167 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.690367 kubelet[2572]: E0913 00:10:29.684060 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.690367 kubelet[2572]: E0913 00:10:29.684099 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.690367 kubelet[2572]: E0913 00:10:29.684133 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.690367 kubelet[2572]: W0913 00:10:29.684217 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.690367 kubelet[2572]: E0913 00:10:29.684828 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.690367 kubelet[2572]: E0913 00:10:29.685210 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.690367 kubelet[2572]: W0913 00:10:29.685242 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.690367 kubelet[2572]: E0913 00:10:29.685302 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.690367 kubelet[2572]: E0913 00:10:29.685763 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.690832 kubelet[2572]: W0913 00:10:29.685778 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.690832 kubelet[2572]: E0913 00:10:29.685914 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.690832 kubelet[2572]: E0913 00:10:29.686520 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.690832 kubelet[2572]: W0913 00:10:29.686535 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.690832 kubelet[2572]: E0913 00:10:29.686697 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.690832 kubelet[2572]: E0913 00:10:29.687074 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.690832 kubelet[2572]: W0913 00:10:29.687100 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.690832 kubelet[2572]: E0913 00:10:29.687433 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:29.690832 kubelet[2572]: E0913 00:10:29.687751 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.690832 kubelet[2572]: W0913 00:10:29.687766 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.691106 kubelet[2572]: E0913 00:10:29.687920 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.691106 kubelet[2572]: E0913 00:10:29.688172 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.691106 kubelet[2572]: W0913 00:10:29.688189 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.691106 kubelet[2572]: E0913 00:10:29.688206 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:29.711004 kubelet[2572]: E0913 00:10:29.710958 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:29.711004 kubelet[2572]: W0913 00:10:29.710997 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:29.711241 kubelet[2572]: E0913 00:10:29.711026 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:30.329171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556307014.mount: Deactivated successfully. 
Sep 13 00:10:31.329961 kubelet[2572]: E0913 00:10:31.329870 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st7d7" podUID="dbe2014e-59d9-4e26-9bc7-323114f09c1f"
Sep 13 00:10:31.718094 containerd[1464]: time="2025-09-13T00:10:31.718013380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:31.720475 containerd[1464]: time="2025-09-13T00:10:31.720007884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:10:31.721554 containerd[1464]: time="2025-09-13T00:10:31.721506659Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:31.726703 containerd[1464]: time="2025-09-13T00:10:31.726654863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:31.728827 containerd[1464]: time="2025-09-13T00:10:31.727573261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.3816622s"
Sep 13 00:10:31.728827 containerd[1464]: time="2025-09-13T00:10:31.727630311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:10:31.733521 containerd[1464]: time="2025-09-13T00:10:31.733468594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:10:31.756661 containerd[1464]: time="2025-09-13T00:10:31.756602746Z" level=info msg="CreateContainer within sandbox \"f11dd3e6292772738744bf24e8bb36971c989b62eb4a17c1f45e39f0c2f0ff72\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:10:31.780812 containerd[1464]: time="2025-09-13T00:10:31.780723514Z" level=info msg="CreateContainer within sandbox \"f11dd3e6292772738744bf24e8bb36971c989b62eb4a17c1f45e39f0c2f0ff72\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"06f141d4c8e05f595bcb72209dc630f23dbcefe551b8428fc3401019babc1f5b\""
Sep 13 00:10:31.783327 containerd[1464]: time="2025-09-13T00:10:31.783249880Z" level=info msg="StartContainer for \"06f141d4c8e05f595bcb72209dc630f23dbcefe551b8428fc3401019babc1f5b\""
Sep 13 00:10:31.834524 systemd[1]: Started cri-containerd-06f141d4c8e05f595bcb72209dc630f23dbcefe551b8428fc3401019babc1f5b.scope - libcontainer container 06f141d4c8e05f595bcb72209dc630f23dbcefe551b8428fc3401019babc1f5b.
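Aside on the pull record above: containerd reports the typha image at size "35237243" bytes pulled in 2.3816622s, which works out to roughly 14.8 MB/s. A throwaway sketch of that arithmetic, with the constants copied verbatim from the log entry:

    // Throughput check for the calico/typha:v3.30.3 pull logged above.
    package main

    import "fmt"

    func main() {
    	const bytesPulled = 35237243 // size reported in the "Pulled image" entry
    	const pullSeconds = 2.3816622
    	rate := bytesPulled / pullSeconds // bytes per second
    	// Prints approximately: 14.8 MB/s (14.1 MiB/s)
    	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
    }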
Sep 13 00:10:31.935162 containerd[1464]: time="2025-09-13T00:10:31.934531249Z" level=info msg="StartContainer for \"06f141d4c8e05f595bcb72209dc630f23dbcefe551b8428fc3401019babc1f5b\" returns successfully"
Sep 13 00:10:32.468265 kubelet[2572]: E0913 00:10:32.467166 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:10:32.468265 kubelet[2572]: W0913 00:10:32.467206 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:10:32.469703 kubelet[2572]: E0913 00:10:32.468952 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the identical FlexVolume probe failure repeats through Sep 13 00:10:32.492 ...]
Sep 13 00:10:32.492879 kubelet[2572]: E0913 00:10:32.492849 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:32.493572 kubelet[2572]: E0913 00:10:32.493504 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.493572 kubelet[2572]: W0913 00:10:32.493524 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.493572 kubelet[2572]: E0913 00:10:32.493542 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.500788 kubelet[2572]: E0913 00:10:32.500557 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.500788 kubelet[2572]: W0913 00:10:32.500626 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.501596 kubelet[2572]: E0913 00:10:32.500655 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.503433 kubelet[2572]: E0913 00:10:32.502557 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.503433 kubelet[2572]: W0913 00:10:32.502580 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.503778 kubelet[2572]: E0913 00:10:32.503654 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.504340 kubelet[2572]: E0913 00:10:32.504278 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.504340 kubelet[2572]: W0913 00:10:32.504298 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.504806 kubelet[2572]: E0913 00:10:32.504764 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.506546 kubelet[2572]: E0913 00:10:32.506207 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.506546 kubelet[2572]: W0913 00:10:32.506254 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.506884 kubelet[2572]: E0913 00:10:32.506613 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:32.508048 kubelet[2572]: E0913 00:10:32.507868 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.508048 kubelet[2572]: W0913 00:10:32.507893 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.508592 kubelet[2572]: E0913 00:10:32.508266 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.508982 kubelet[2572]: E0913 00:10:32.508892 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.509464 kubelet[2572]: W0913 00:10:32.509384 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.509912 kubelet[2572]: E0913 00:10:32.509746 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.511758 kubelet[2572]: E0913 00:10:32.511737 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.512118 kubelet[2572]: W0913 00:10:32.511891 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.512535 kubelet[2572]: E0913 00:10:32.512330 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.513972 kubelet[2572]: E0913 00:10:32.513461 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.513972 kubelet[2572]: W0913 00:10:32.513484 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.513972 kubelet[2572]: E0913 00:10:32.513620 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.515654 kubelet[2572]: E0913 00:10:32.515633 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.517384 kubelet[2572]: W0913 00:10:32.517271 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.517665 kubelet[2572]: E0913 00:10:32.517451 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:32.518147 kubelet[2572]: E0913 00:10:32.517999 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.518147 kubelet[2572]: W0913 00:10:32.518018 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.518486 kubelet[2572]: E0913 00:10:32.518309 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.518634 kubelet[2572]: E0913 00:10:32.518620 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.518736 kubelet[2572]: W0913 00:10:32.518720 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.518978 kubelet[2572]: E0913 00:10:32.518930 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.521252 kubelet[2572]: E0913 00:10:32.519294 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.521252 kubelet[2572]: W0913 00:10:32.519311 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.521625 kubelet[2572]: E0913 00:10:32.521421 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.521815 kubelet[2572]: E0913 00:10:32.521799 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.521934 kubelet[2572]: W0913 00:10:32.521916 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.522215 kubelet[2572]: E0913 00:10:32.522016 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.522844 kubelet[2572]: E0913 00:10:32.522546 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.522844 kubelet[2572]: W0913 00:10:32.522562 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.522844 kubelet[2572]: E0913 00:10:32.522610 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:32.524576 kubelet[2572]: E0913 00:10:32.523761 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.524576 kubelet[2572]: W0913 00:10:32.523780 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.524576 kubelet[2572]: E0913 00:10:32.523805 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.525100 kubelet[2572]: E0913 00:10:32.525057 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.525100 kubelet[2572]: W0913 00:10:32.525076 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.526713 kubelet[2572]: E0913 00:10:32.526686 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.527069 kubelet[2572]: E0913 00:10:32.526877 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.527069 kubelet[2572]: W0913 00:10:32.527004 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.527069 kubelet[2572]: E0913 00:10:32.527026 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:32.527824 kubelet[2572]: E0913 00:10:32.527809 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:32.528059 kubelet[2572]: W0913 00:10:32.528038 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:32.529308 kubelet[2572]: E0913 00:10:32.529281 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:32.964856 containerd[1464]: time="2025-09-13T00:10:32.964746703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:32.966761 containerd[1464]: time="2025-09-13T00:10:32.966665388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:10:32.968257 containerd[1464]: time="2025-09-13T00:10:32.968132070Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:32.973735 containerd[1464]: time="2025-09-13T00:10:32.973658050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:32.974779 containerd[1464]: time="2025-09-13T00:10:32.974725021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.241051701s" Sep 13 00:10:32.974926 containerd[1464]: time="2025-09-13T00:10:32.974783181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:10:32.979976 containerd[1464]: time="2025-09-13T00:10:32.979901549Z" level=info msg="CreateContainer within sandbox \"d4161a9de7ce764fa27a90a1be993d0f7a73d23350bfd86b6cb18db38211a984\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:10:33.007123 containerd[1464]: time="2025-09-13T00:10:33.007043752Z" level=info msg="CreateContainer within sandbox \"d4161a9de7ce764fa27a90a1be993d0f7a73d23350bfd86b6cb18db38211a984\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a4ce8d30647eee8838c47e2b6d2bd80cebd69b6d6204bc20666c41f111695897\"" Sep 13 00:10:33.008027 containerd[1464]: time="2025-09-13T00:10:33.007979607Z" level=info msg="StartContainer for \"a4ce8d30647eee8838c47e2b6d2bd80cebd69b6d6204bc20666c41f111695897\"" Sep 13 00:10:33.074551 systemd[1]: Started cri-containerd-a4ce8d30647eee8838c47e2b6d2bd80cebd69b6d6204bc20666c41f111695897.scope - libcontainer container a4ce8d30647eee8838c47e2b6d2bd80cebd69b6d6204bc20666c41f111695897. Sep 13 00:10:33.119813 containerd[1464]: time="2025-09-13T00:10:33.118579430Z" level=info msg="StartContainer for \"a4ce8d30647eee8838c47e2b6d2bd80cebd69b6d6204bc20666c41f111695897\" returns successfully" Sep 13 00:10:33.144508 systemd[1]: cri-containerd-a4ce8d30647eee8838c47e2b6d2bd80cebd69b6d6204bc20666c41f111695897.scope: Deactivated successfully. Sep 13 00:10:33.191461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4ce8d30647eee8838c47e2b6d2bd80cebd69b6d6204bc20666c41f111695897-rootfs.mount: Deactivated successfully. 
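[note] The wall of paired driver-call.go / plugins.go errors above is kubelet's FlexVolume prober: on every probe it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver binary with the argument init, and unmarshals the JSON it expects on stdout. The nodeagent~uds/uds binary does not exist yet, so the exec fails ("executable file not found in $PATH"), stdout is empty, and unmarshalling "" yields "unexpected end of JSON input". The flexvol-driver container started just above (calico/pod2daemon-flexvol) is what installs that uds binary, which is why the flood stops after it runs. A stdlib-only sketch of the handshake, with DriverStatus as a simplified stand-in for kubelet's real type:

// Sketch of kubelet's FlexVolume "init" handshake: exec the driver binary,
// then unmarshal its stdout as JSON. An absent binary reproduces exactly the
// pair of errors in this log: the exec fails and the empty output fails to
// parse. DriverStatus is a simplified stand-in, not kubelet's actual struct.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type DriverStatus struct {
	Status  string `json:"status"` // "Success", "Failure", "Not supported"
	Message string `json:"message,omitempty"`
}

func callDriver(path string, args ...string) (*DriverStatus, error) {
	out, execErr := exec.Command(path, args...).Output()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With no binary installed, out is "" and this is the
		// "unexpected end of JSON input" seen above.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}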
Sep 13 00:10:33.330700 kubelet[2572]: E0913 00:10:33.330108 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st7d7" podUID="dbe2014e-59d9-4e26-9bc7-323114f09c1f" Sep 13 00:10:33.454848 kubelet[2572]: I0913 00:10:33.454143 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:10:33.481267 kubelet[2572]: I0913 00:10:33.480309 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54995c46f8-n8f8p" podStartSLOduration=3.09463492 podStartE2EDuration="5.480283s" podCreationTimestamp="2025-09-13 00:10:28 +0000 UTC" firstStartedPulling="2025-09-13 00:10:29.344842747 +0000 UTC m=+21.218859694" lastFinishedPulling="2025-09-13 00:10:31.730490818 +0000 UTC m=+23.604507774" observedRunningTime="2025-09-13 00:10:32.557859554 +0000 UTC m=+24.431876516" watchObservedRunningTime="2025-09-13 00:10:33.480283 +0000 UTC m=+25.354299964" Sep 13 00:10:33.872573 containerd[1464]: time="2025-09-13T00:10:33.872284486Z" level=info msg="shim disconnected" id=a4ce8d30647eee8838c47e2b6d2bd80cebd69b6d6204bc20666c41f111695897 namespace=k8s.io Sep 13 00:10:33.872573 containerd[1464]: time="2025-09-13T00:10:33.872385655Z" level=warning msg="cleaning up after shim disconnected" id=a4ce8d30647eee8838c47e2b6d2bd80cebd69b6d6204bc20666c41f111695897 namespace=k8s.io Sep 13 00:10:33.872573 containerd[1464]: time="2025-09-13T00:10:33.872401482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:10:34.463556 containerd[1464]: time="2025-09-13T00:10:34.463498724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:10:35.329589 kubelet[2572]: E0913 00:10:35.329517 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st7d7" podUID="dbe2014e-59d9-4e26-9bc7-323114f09c1f" Sep 13 00:10:37.329892 kubelet[2572]: E0913 00:10:37.329821 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-st7d7" podUID="dbe2014e-59d9-4e26-9bc7-323114f09c1f" Sep 13 00:10:37.876874 containerd[1464]: time="2025-09-13T00:10:37.876796131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:37.878261 containerd[1464]: time="2025-09-13T00:10:37.878159243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:10:37.879932 containerd[1464]: time="2025-09-13T00:10:37.879857404Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:37.884280 containerd[1464]: time="2025-09-13T00:10:37.883569513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 
00:10:37.885498 containerd[1464]: time="2025-09-13T00:10:37.884483186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.420920164s" Sep 13 00:10:37.885498 containerd[1464]: time="2025-09-13T00:10:37.884530962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:10:37.888174 containerd[1464]: time="2025-09-13T00:10:37.887904331Z" level=info msg="CreateContainer within sandbox \"d4161a9de7ce764fa27a90a1be993d0f7a73d23350bfd86b6cb18db38211a984\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:10:37.909596 containerd[1464]: time="2025-09-13T00:10:37.909490430Z" level=info msg="CreateContainer within sandbox \"d4161a9de7ce764fa27a90a1be993d0f7a73d23350bfd86b6cb18db38211a984\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08\"" Sep 13 00:10:37.912927 containerd[1464]: time="2025-09-13T00:10:37.911090101Z" level=info msg="StartContainer for \"3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08\"" Sep 13 00:10:37.965632 systemd[1]: run-containerd-runc-k8s.io-3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08-runc.KXtZJr.mount: Deactivated successfully. Sep 13 00:10:37.975493 systemd[1]: Started cri-containerd-3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08.scope - libcontainer container 3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08. Sep 13 00:10:38.017711 containerd[1464]: time="2025-09-13T00:10:38.017610931Z" level=info msg="StartContainer for \"3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08\" returns successfully" Sep 13 00:10:39.089079 containerd[1464]: time="2025-09-13T00:10:39.089017482Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:10:39.093250 systemd[1]: cri-containerd-3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08.scope: Deactivated successfully. Sep 13 00:10:39.132739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08-rootfs.mount: Deactivated successfully. Sep 13 00:10:39.135788 kubelet[2572]: I0913 00:10:39.134489 2572 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:10:39.199627 systemd[1]: Created slice kubepods-burstable-pod6cd4a1a8_fc8a_4205_a62b_c1d1dd0abb8f.slice - libcontainer container kubepods-burstable-pod6cd4a1a8_fc8a_4205_a62b_c1d1dd0abb8f.slice. Sep 13 00:10:39.219179 systemd[1]: Created slice kubepods-burstable-pod18c29eae_9c67_41af_9566_6e188885af9e.slice - libcontainer container kubepods-burstable-pod18c29eae_9c67_41af_9566_6e188885af9e.slice. Sep 13 00:10:39.245352 systemd[1]: Created slice kubepods-besteffort-podcdf14ae2_e5cc_4d6f_8ee1_298a0fa2d089.slice - libcontainer container kubepods-besteffort-podcdf14ae2_e5cc_4d6f_8ee1_298a0fa2d089.slice. 
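[note] The failed-to-reload error above fires because install-cni's first write into /etc/cni/net.d is the calico-kubeconfig file: containerd's CRI plugin watches the directory and re-scans it on every write event, but a kubeconfig is not a network config, and the actual conflist has not landed yet, so the node stays NetworkReady=false. A stdlib sketch of that scan; containerd really goes through libcni, and the extension filter below mirrors its defaults but is hardcoded here as an assumption:

// Sketch of the check behind "no network config found in /etc/cni/net.d":
// scan the conf dir for loadable CNI config files. The .conf/.conflist/.json
// filter is assumed to match libcni's defaults.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func findCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, filepath.Join(dir, e.Name()))
		}
	}
	if len(confs) == 0 {
		// calico-kubeconfig has no matching extension, so after the
		// WRITE event the directory still yields nothing loadable.
		return nil, fmt.Errorf("no network config found in %s: cni plugin not initialized", dir)
	}
	return confs, nil
}

func main() {
	confs, err := findCNIConfigs("/etc/cni/net.d")
	fmt.Println(confs, err)
}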
Sep 13 00:10:39.262300 systemd[1]: Created slice kubepods-besteffort-pod7e85b06c_dd91_46b3_8259_324d7b213ab5.slice - libcontainer container kubepods-besteffort-pod7e85b06c_dd91_46b3_8259_324d7b213ab5.slice. Sep 13 00:10:39.268273 kubelet[2572]: I0913 00:10:39.266779 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wn7w\" (UniqueName: \"kubernetes.io/projected/a122eeee-1c48-4d4d-8b11-6b176f51757c-kube-api-access-7wn7w\") pod \"calico-apiserver-858b45798c-wf79q\" (UID: \"a122eeee-1c48-4d4d-8b11-6b176f51757c\") " pod="calico-apiserver/calico-apiserver-858b45798c-wf79q" Sep 13 00:10:39.268273 kubelet[2572]: I0913 00:10:39.266880 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e85b06c-dd91-46b3-8259-324d7b213ab5-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-xlg2k\" (UID: \"7e85b06c-dd91-46b3-8259-324d7b213ab5\") " pod="calico-system/goldmane-54d579b49d-xlg2k" Sep 13 00:10:39.268273 kubelet[2572]: I0913 00:10:39.266913 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8zz\" (UniqueName: \"kubernetes.io/projected/18c29eae-9c67-41af-9566-6e188885af9e-kube-api-access-jg8zz\") pod \"coredns-668d6bf9bc-7lkdb\" (UID: \"18c29eae-9c67-41af-9566-6e188885af9e\") " pod="kube-system/coredns-668d6bf9bc-7lkdb" Sep 13 00:10:39.268273 kubelet[2572]: I0913 00:10:39.266965 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f-config-volume\") pod \"coredns-668d6bf9bc-6sxxd\" (UID: \"6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f\") " pod="kube-system/coredns-668d6bf9bc-6sxxd" Sep 13 00:10:39.268273 kubelet[2572]: I0913 00:10:39.266995 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vkd\" (UniqueName: \"kubernetes.io/projected/87c611c9-e5c6-4577-8012-22c2c46a1517-kube-api-access-n7vkd\") pod \"whisker-5df6d656dc-wgt8k\" (UID: \"87c611c9-e5c6-4577-8012-22c2c46a1517\") " pod="calico-system/whisker-5df6d656dc-wgt8k" Sep 13 00:10:39.268658 kubelet[2572]: I0913 00:10:39.267024 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7e85b06c-dd91-46b3-8259-324d7b213ab5-goldmane-key-pair\") pod \"goldmane-54d579b49d-xlg2k\" (UID: \"7e85b06c-dd91-46b3-8259-324d7b213ab5\") " pod="calico-system/goldmane-54d579b49d-xlg2k" Sep 13 00:10:39.268658 kubelet[2572]: I0913 00:10:39.267051 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7dq\" (UniqueName: \"kubernetes.io/projected/7e85b06c-dd91-46b3-8259-324d7b213ab5-kube-api-access-hj7dq\") pod \"goldmane-54d579b49d-xlg2k\" (UID: \"7e85b06c-dd91-46b3-8259-324d7b213ab5\") " pod="calico-system/goldmane-54d579b49d-xlg2k" Sep 13 00:10:39.268658 kubelet[2572]: I0913 00:10:39.267081 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089-tigera-ca-bundle\") pod \"calico-kube-controllers-5b44c8449-j7495\" (UID: \"cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089\") " pod="calico-system/calico-kube-controllers-5b44c8449-j7495" Sep 13 
00:10:39.268658 kubelet[2572]: I0913 00:10:39.267113 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e85b06c-dd91-46b3-8259-324d7b213ab5-config\") pod \"goldmane-54d579b49d-xlg2k\" (UID: \"7e85b06c-dd91-46b3-8259-324d7b213ab5\") " pod="calico-system/goldmane-54d579b49d-xlg2k" Sep 13 00:10:39.268658 kubelet[2572]: I0913 00:10:39.267144 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18c29eae-9c67-41af-9566-6e188885af9e-config-volume\") pod \"coredns-668d6bf9bc-7lkdb\" (UID: \"18c29eae-9c67-41af-9566-6e188885af9e\") " pod="kube-system/coredns-668d6bf9bc-7lkdb" Sep 13 00:10:39.268934 kubelet[2572]: I0913 00:10:39.267173 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j6qc\" (UniqueName: \"kubernetes.io/projected/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e-kube-api-access-6j6qc\") pod \"calico-apiserver-858b45798c-lxwx2\" (UID: \"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e\") " pod="calico-apiserver/calico-apiserver-858b45798c-lxwx2" Sep 13 00:10:39.268934 kubelet[2572]: I0913 00:10:39.267206 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87c611c9-e5c6-4577-8012-22c2c46a1517-whisker-backend-key-pair\") pod \"whisker-5df6d656dc-wgt8k\" (UID: \"87c611c9-e5c6-4577-8012-22c2c46a1517\") " pod="calico-system/whisker-5df6d656dc-wgt8k" Sep 13 00:10:39.268934 kubelet[2572]: I0913 00:10:39.267729 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87c611c9-e5c6-4577-8012-22c2c46a1517-whisker-ca-bundle\") pod \"whisker-5df6d656dc-wgt8k\" (UID: \"87c611c9-e5c6-4577-8012-22c2c46a1517\") " pod="calico-system/whisker-5df6d656dc-wgt8k" Sep 13 00:10:39.268934 kubelet[2572]: I0913 00:10:39.267803 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrmf2\" (UniqueName: \"kubernetes.io/projected/6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f-kube-api-access-vrmf2\") pod \"coredns-668d6bf9bc-6sxxd\" (UID: \"6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f\") " pod="kube-system/coredns-668d6bf9bc-6sxxd" Sep 13 00:10:39.268934 kubelet[2572]: I0913 00:10:39.267846 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjskp\" (UniqueName: \"kubernetes.io/projected/cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089-kube-api-access-vjskp\") pod \"calico-kube-controllers-5b44c8449-j7495\" (UID: \"cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089\") " pod="calico-system/calico-kube-controllers-5b44c8449-j7495" Sep 13 00:10:39.269194 kubelet[2572]: I0913 00:10:39.267876 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a122eeee-1c48-4d4d-8b11-6b176f51757c-calico-apiserver-certs\") pod \"calico-apiserver-858b45798c-wf79q\" (UID: \"a122eeee-1c48-4d4d-8b11-6b176f51757c\") " pod="calico-apiserver/calico-apiserver-858b45798c-wf79q" Sep 13 00:10:39.269194 kubelet[2572]: I0913 00:10:39.267909 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e-calico-apiserver-certs\") pod \"calico-apiserver-858b45798c-lxwx2\" (UID: \"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e\") " pod="calico-apiserver/calico-apiserver-858b45798c-lxwx2" Sep 13 00:10:39.269194 kubelet[2572]: I0913 00:10:39.267954 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b28393a-bc83-414b-b383-70eb716d64ee-calico-apiserver-certs\") pod \"calico-apiserver-77b67c64b5-zwfwq\" (UID: \"1b28393a-bc83-414b-b383-70eb716d64ee\") " pod="calico-apiserver/calico-apiserver-77b67c64b5-zwfwq" Sep 13 00:10:39.269194 kubelet[2572]: I0913 00:10:39.267983 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbkwg\" (UniqueName: \"kubernetes.io/projected/1b28393a-bc83-414b-b383-70eb716d64ee-kube-api-access-rbkwg\") pod \"calico-apiserver-77b67c64b5-zwfwq\" (UID: \"1b28393a-bc83-414b-b383-70eb716d64ee\") " pod="calico-apiserver/calico-apiserver-77b67c64b5-zwfwq" Sep 13 00:10:39.278064 systemd[1]: Created slice kubepods-besteffort-pod87c611c9_e5c6_4577_8012_22c2c46a1517.slice - libcontainer container kubepods-besteffort-pod87c611c9_e5c6_4577_8012_22c2c46a1517.slice. Sep 13 00:10:39.291375 systemd[1]: Created slice kubepods-besteffort-pod4bc4ce4d_5c95_4963_89e0_e92aa88cde6e.slice - libcontainer container kubepods-besteffort-pod4bc4ce4d_5c95_4963_89e0_e92aa88cde6e.slice. Sep 13 00:10:39.303716 systemd[1]: Created slice kubepods-besteffort-pod1b28393a_bc83_414b_b383_70eb716d64ee.slice - libcontainer container kubepods-besteffort-pod1b28393a_bc83_414b_b383_70eb716d64ee.slice. Sep 13 00:10:39.324090 systemd[1]: Created slice kubepods-besteffort-poda122eeee_1c48_4d4d_8b11_6b176f51757c.slice - libcontainer container kubepods-besteffort-poda122eeee_1c48_4d4d_8b11_6b176f51757c.slice. Sep 13 00:10:39.342056 systemd[1]: Created slice kubepods-besteffort-poddbe2014e_59d9_4e26_9bc7_323114f09c1f.slice - libcontainer container kubepods-besteffort-poddbe2014e_59d9_4e26_9bc7_323114f09c1f.slice. 
Sep 13 00:10:39.397478 containerd[1464]: time="2025-09-13T00:10:39.397004744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-st7d7,Uid:dbe2014e-59d9-4e26-9bc7-323114f09c1f,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:39.515714 containerd[1464]: time="2025-09-13T00:10:39.515656626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6sxxd,Uid:6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:39.529851 containerd[1464]: time="2025-09-13T00:10:39.529785651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lkdb,Uid:18c29eae-9c67-41af-9566-6e188885af9e,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:39.552821 containerd[1464]: time="2025-09-13T00:10:39.552665397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b44c8449-j7495,Uid:cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:39.568220 containerd[1464]: time="2025-09-13T00:10:39.568159177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xlg2k,Uid:7e85b06c-dd91-46b3-8259-324d7b213ab5,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:39.676476 containerd[1464]: time="2025-09-13T00:10:39.676295830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df6d656dc-wgt8k,Uid:87c611c9-e5c6-4577-8012-22c2c46a1517,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:39.686894 containerd[1464]: time="2025-09-13T00:10:39.686825304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858b45798c-lxwx2,Uid:4bc4ce4d-5c95-4963-89e0-e92aa88cde6e,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:10:39.689013 containerd[1464]: time="2025-09-13T00:10:39.688089557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858b45798c-wf79q,Uid:a122eeee-1c48-4d4d-8b11-6b176f51757c,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:10:39.692992 containerd[1464]: time="2025-09-13T00:10:39.692858705Z" level=info msg="shim disconnected" id=3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08 namespace=k8s.io Sep 13 00:10:39.692992 containerd[1464]: time="2025-09-13T00:10:39.692993717Z" level=warning msg="cleaning up after shim disconnected" id=3339ea698ecdd21271101581f178219036b3a624389c4dabbedd3dfadfdd0e08 namespace=k8s.io Sep 13 00:10:39.693215 containerd[1464]: time="2025-09-13T00:10:39.693013756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:10:39.699629 containerd[1464]: time="2025-09-13T00:10:39.699322018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b67c64b5-zwfwq,Uid:1b28393a-bc83-414b-b383-70eb716d64ee,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:10:40.133339 containerd[1464]: time="2025-09-13T00:10:40.132458253Z" level=error msg="Failed to destroy network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.137268 containerd[1464]: time="2025-09-13T00:10:40.135564497Z" level=error msg="encountered an error cleaning up failed sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.137624 containerd[1464]: time="2025-09-13T00:10:40.137565446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-st7d7,Uid:dbe2014e-59d9-4e26-9bc7-323114f09c1f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.142440 kubelet[2572]: E0913 00:10:40.138504 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.142440 kubelet[2572]: E0913 00:10:40.138683 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-st7d7" Sep 13 00:10:40.142440 kubelet[2572]: E0913 00:10:40.138723 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-st7d7" Sep 13 00:10:40.143119 kubelet[2572]: E0913 00:10:40.139031 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-st7d7_calico-system(dbe2014e-59d9-4e26-9bc7-323114f09c1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-st7d7_calico-system(dbe2014e-59d9-4e26-9bc7-323114f09c1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-st7d7" podUID="dbe2014e-59d9-4e26-9bc7-323114f09c1f" Sep 13 00:10:40.207632 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41-shm.mount: Deactivated successfully. 
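[note] This RunPodSandbox failure, and every one that follows it, dies in the Calico CNI plugin's first step: it resolves the node identity from /var/lib/calico/nodename, a file the calico/node container writes once it is up. Since calico/node is not running yet, the stat fails on both the add and delete paths, which is why the same message appears in the destroy-network errors too. A sketch of that lookup, with the path taken from the log and the read-back behavior an assumption:

// Sketch of the check that fails in every RunPodSandbox above: the Calico
// CNI plugin reads /var/lib/calico/nodename, written by the calico/node
// container at startup. Until that container runs, a plain stat fails and
// both the CNI add and delete operations error out with this message.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodename(path string) (string, error) {
	if _, err := os.Stat(path); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodename("/var/lib/calico/nodename")
	fmt.Println(name, err)
}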
Sep 13 00:10:40.293693 containerd[1464]: time="2025-09-13T00:10:40.293490564Z" level=error msg="Failed to destroy network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.295959 containerd[1464]: time="2025-09-13T00:10:40.294639088Z" level=error msg="encountered an error cleaning up failed sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.296746 containerd[1464]: time="2025-09-13T00:10:40.296282675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b44c8449-j7495,Uid:cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.301878 kubelet[2572]: E0913 00:10:40.298496 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.301878 kubelet[2572]: E0913 00:10:40.298586 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b44c8449-j7495" Sep 13 00:10:40.301878 kubelet[2572]: E0913 00:10:40.298623 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b44c8449-j7495" Sep 13 00:10:40.300440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd-shm.mount: Deactivated successfully. 
Sep 13 00:10:40.303070 kubelet[2572]: E0913 00:10:40.298690 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b44c8449-j7495_calico-system(cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b44c8449-j7495_calico-system(cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b44c8449-j7495" podUID="cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089" Sep 13 00:10:40.328266 containerd[1464]: time="2025-09-13T00:10:40.326099341Z" level=error msg="Failed to destroy network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.332848 containerd[1464]: time="2025-09-13T00:10:40.331815514Z" level=error msg="encountered an error cleaning up failed sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.332848 containerd[1464]: time="2025-09-13T00:10:40.331919303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lkdb,Uid:18c29eae-9c67-41af-9566-6e188885af9e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.332724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c-shm.mount: Deactivated successfully. 
Sep 13 00:10:40.337578 kubelet[2572]: E0913 00:10:40.336130 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.337578 kubelet[2572]: E0913 00:10:40.336213 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lkdb" Sep 13 00:10:40.337578 kubelet[2572]: E0913 00:10:40.336291 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lkdb" Sep 13 00:10:40.337890 kubelet[2572]: E0913 00:10:40.336364 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lkdb_kube-system(18c29eae-9c67-41af-9566-6e188885af9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7lkdb_kube-system(18c29eae-9c67-41af-9566-6e188885af9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lkdb" podUID="18c29eae-9c67-41af-9566-6e188885af9e" Sep 13 00:10:40.359672 containerd[1464]: time="2025-09-13T00:10:40.358970223Z" level=error msg="Failed to destroy network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.366748 containerd[1464]: time="2025-09-13T00:10:40.363622642Z" level=error msg="Failed to destroy network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.367342 containerd[1464]: time="2025-09-13T00:10:40.366879817Z" level=error msg="encountered an error cleaning up failed sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.367342 containerd[1464]: time="2025-09-13T00:10:40.366960826Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b67c64b5-zwfwq,Uid:1b28393a-bc83-414b-b383-70eb716d64ee,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.367509 kubelet[2572]: E0913 00:10:40.367446 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.367592 kubelet[2572]: E0913 00:10:40.367527 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77b67c64b5-zwfwq" Sep 13 00:10:40.367592 kubelet[2572]: E0913 00:10:40.367563 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77b67c64b5-zwfwq" Sep 13 00:10:40.367702 kubelet[2572]: E0913 00:10:40.367625 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77b67c64b5-zwfwq_calico-apiserver(1b28393a-bc83-414b-b383-70eb716d64ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77b67c64b5-zwfwq_calico-apiserver(1b28393a-bc83-414b-b383-70eb716d64ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77b67c64b5-zwfwq" podUID="1b28393a-bc83-414b-b383-70eb716d64ee" Sep 13 00:10:40.368322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f-shm.mount: Deactivated successfully. 
Sep 13 00:10:40.372223 containerd[1464]: time="2025-09-13T00:10:40.372170938Z" level=error msg="encountered an error cleaning up failed sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.374447 containerd[1464]: time="2025-09-13T00:10:40.374135436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6sxxd,Uid:6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.374844 kubelet[2572]: E0913 00:10:40.374802 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.375048 kubelet[2572]: E0913 00:10:40.375011 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6sxxd" Sep 13 00:10:40.375185 kubelet[2572]: E0913 00:10:40.375161 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6sxxd" Sep 13 00:10:40.375504 kubelet[2572]: E0913 00:10:40.375402 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6sxxd_kube-system(6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6sxxd_kube-system(6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6sxxd" podUID="6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f" Sep 13 00:10:40.383060 containerd[1464]: time="2025-09-13T00:10:40.382981933Z" level=error msg="Failed to destroy network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 13 00:10:40.383728 containerd[1464]: time="2025-09-13T00:10:40.383588583Z" level=error msg="encountered an error cleaning up failed sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.384623 containerd[1464]: time="2025-09-13T00:10:40.384413816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df6d656dc-wgt8k,Uid:87c611c9-e5c6-4577-8012-22c2c46a1517,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.385973 kubelet[2572]: E0913 00:10:40.385565 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.385973 kubelet[2572]: E0913 00:10:40.385636 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df6d656dc-wgt8k" Sep 13 00:10:40.385973 kubelet[2572]: E0913 00:10:40.385675 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df6d656dc-wgt8k" Sep 13 00:10:40.386203 kubelet[2572]: E0913 00:10:40.385731 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5df6d656dc-wgt8k_calico-system(87c611c9-e5c6-4577-8012-22c2c46a1517)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5df6d656dc-wgt8k_calico-system(87c611c9-e5c6-4577-8012-22c2c46a1517)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5df6d656dc-wgt8k" podUID="87c611c9-e5c6-4577-8012-22c2c46a1517" Sep 13 00:10:40.402931 containerd[1464]: time="2025-09-13T00:10:40.402845888Z" level=error msg="Failed to destroy network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.403833 containerd[1464]: time="2025-09-13T00:10:40.403753783Z" level=error msg="encountered an error cleaning up failed sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.404748 containerd[1464]: time="2025-09-13T00:10:40.403946378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858b45798c-lxwx2,Uid:4bc4ce4d-5c95-4963-89e0-e92aa88cde6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.405473 kubelet[2572]: E0913 00:10:40.405004 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.405473 kubelet[2572]: E0913 00:10:40.405428 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-858b45798c-lxwx2" Sep 13 00:10:40.406448 kubelet[2572]: E0913 00:10:40.405468 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-858b45798c-lxwx2" Sep 13 00:10:40.406448 kubelet[2572]: E0913 00:10:40.405561 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-858b45798c-lxwx2_calico-apiserver(4bc4ce4d-5c95-4963-89e0-e92aa88cde6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-858b45798c-lxwx2_calico-apiserver(4bc4ce4d-5c95-4963-89e0-e92aa88cde6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-858b45798c-lxwx2" podUID="4bc4ce4d-5c95-4963-89e0-e92aa88cde6e" Sep 13 00:10:40.425867 containerd[1464]: time="2025-09-13T00:10:40.425776579Z" level=error msg="Failed to destroy network for sandbox 
\"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.426924 containerd[1464]: time="2025-09-13T00:10:40.426774327Z" level=error msg="encountered an error cleaning up failed sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.426924 containerd[1464]: time="2025-09-13T00:10:40.426885600Z" level=error msg="Failed to destroy network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.427947 containerd[1464]: time="2025-09-13T00:10:40.426890482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xlg2k,Uid:7e85b06c-dd91-46b3-8259-324d7b213ab5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.427947 containerd[1464]: time="2025-09-13T00:10:40.427333092Z" level=error msg="encountered an error cleaning up failed sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.427947 containerd[1464]: time="2025-09-13T00:10:40.427385963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858b45798c-wf79q,Uid:a122eeee-1c48-4d4d-8b11-6b176f51757c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.428339 kubelet[2572]: E0913 00:10:40.427528 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.428339 kubelet[2572]: E0913 00:10:40.427590 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-54d579b49d-xlg2k" Sep 13 00:10:40.428339 kubelet[2572]: E0913 00:10:40.427620 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-xlg2k" Sep 13 00:10:40.428545 kubelet[2572]: E0913 00:10:40.427669 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-xlg2k_calico-system(7e85b06c-dd91-46b3-8259-324d7b213ab5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-xlg2k_calico-system(7e85b06c-dd91-46b3-8259-324d7b213ab5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-xlg2k" podUID="7e85b06c-dd91-46b3-8259-324d7b213ab5" Sep 13 00:10:40.429461 kubelet[2572]: E0913 00:10:40.429046 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.429461 kubelet[2572]: E0913 00:10:40.429135 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-858b45798c-wf79q" Sep 13 00:10:40.429461 kubelet[2572]: E0913 00:10:40.429193 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-858b45798c-wf79q" Sep 13 00:10:40.429728 kubelet[2572]: E0913 00:10:40.429372 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-858b45798c-wf79q_calico-apiserver(a122eeee-1c48-4d4d-8b11-6b176f51757c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-858b45798c-wf79q_calico-apiserver(a122eeee-1c48-4d4d-8b11-6b176f51757c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-858b45798c-wf79q" podUID="a122eeee-1c48-4d4d-8b11-6b176f51757c" Sep 13 00:10:40.512715 kubelet[2572]: I0913 00:10:40.512653 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:10:40.513560 containerd[1464]: time="2025-09-13T00:10:40.513471863Z" level=info msg="StopPodSandbox for \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\"" Sep 13 00:10:40.514467 containerd[1464]: time="2025-09-13T00:10:40.514001693Z" level=info msg="Ensure that sandbox a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f in task-service has been cleanup successfully" Sep 13 00:10:40.517308 kubelet[2572]: I0913 00:10:40.515967 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:10:40.518293 containerd[1464]: time="2025-09-13T00:10:40.518248231Z" level=info msg="StopPodSandbox for \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\"" Sep 13 00:10:40.519000 containerd[1464]: time="2025-09-13T00:10:40.518965780Z" level=info msg="Ensure that sandbox 700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491 in task-service has been cleanup successfully" Sep 13 00:10:40.524916 kubelet[2572]: I0913 00:10:40.523920 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:10:40.528505 containerd[1464]: time="2025-09-13T00:10:40.528122324Z" level=info msg="StopPodSandbox for \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\"" Sep 13 00:10:40.531135 kubelet[2572]: I0913 00:10:40.530998 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:10:40.532401 containerd[1464]: time="2025-09-13T00:10:40.532361588Z" level=info msg="Ensure that sandbox 05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd in task-service has been cleanup successfully" Sep 13 00:10:40.534438 containerd[1464]: time="2025-09-13T00:10:40.534374925Z" level=info msg="StopPodSandbox for \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\"" Sep 13 00:10:40.535834 containerd[1464]: time="2025-09-13T00:10:40.535426281Z" level=info msg="Ensure that sandbox e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41 in task-service has been cleanup successfully" Sep 13 00:10:40.555185 kubelet[2572]: I0913 00:10:40.555151 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:10:40.556386 containerd[1464]: time="2025-09-13T00:10:40.555781668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:10:40.560841 containerd[1464]: time="2025-09-13T00:10:40.560701847Z" level=info msg="StopPodSandbox for \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\"" Sep 13 00:10:40.561086 containerd[1464]: time="2025-09-13T00:10:40.560973291Z" level=info msg="Ensure that sandbox 0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf in task-service has been cleanup successfully" Sep 13 00:10:40.586128 kubelet[2572]: I0913 00:10:40.585591 2572 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:10:40.588024 containerd[1464]: time="2025-09-13T00:10:40.587980365Z" level=info msg="StopPodSandbox for \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\"" Sep 13 00:10:40.590571 containerd[1464]: time="2025-09-13T00:10:40.589850974Z" level=info msg="Ensure that sandbox 175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505 in task-service has been cleanup successfully" Sep 13 00:10:40.600198 kubelet[2572]: I0913 00:10:40.598627 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:10:40.600373 containerd[1464]: time="2025-09-13T00:10:40.599646422Z" level=info msg="StopPodSandbox for \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\"" Sep 13 00:10:40.601582 containerd[1464]: time="2025-09-13T00:10:40.601316199Z" level=info msg="Ensure that sandbox 0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f in task-service has been cleanup successfully" Sep 13 00:10:40.606605 kubelet[2572]: I0913 00:10:40.606554 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:10:40.616802 containerd[1464]: time="2025-09-13T00:10:40.616721048Z" level=info msg="StopPodSandbox for \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\"" Sep 13 00:10:40.620031 containerd[1464]: time="2025-09-13T00:10:40.619956531Z" level=info msg="Ensure that sandbox c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac in task-service has been cleanup successfully" Sep 13 00:10:40.632143 kubelet[2572]: I0913 00:10:40.632103 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:10:40.634770 containerd[1464]: time="2025-09-13T00:10:40.633967012Z" level=info msg="StopPodSandbox for \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\"" Sep 13 00:10:40.634770 containerd[1464]: time="2025-09-13T00:10:40.634202963Z" level=info msg="Ensure that sandbox 333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c in task-service has been cleanup successfully" Sep 13 00:10:40.731822 containerd[1464]: time="2025-09-13T00:10:40.731742389Z" level=error msg="StopPodSandbox for \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\" failed" error="failed to destroy network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.732374 kubelet[2572]: E0913 00:10:40.732321 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:10:40.732669 kubelet[2572]: E0913 00:10:40.732606 2572 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491"} Sep 13 00:10:40.732826 kubelet[2572]: E0913 00:10:40.732802 2572 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:40.733069 kubelet[2572]: E0913 00:10:40.732999 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-858b45798c-lxwx2" podUID="4bc4ce4d-5c95-4963-89e0-e92aa88cde6e" Sep 13 00:10:40.785771 containerd[1464]: time="2025-09-13T00:10:40.784193983Z" level=error msg="StopPodSandbox for \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\" failed" error="failed to destroy network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.785986 kubelet[2572]: E0913 00:10:40.784684 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:10:40.785986 kubelet[2572]: E0913 00:10:40.784752 2572 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f"} Sep 13 00:10:40.785986 kubelet[2572]: E0913 00:10:40.784813 2572 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b28393a-bc83-414b-b383-70eb716d64ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:40.785986 kubelet[2572]: E0913 00:10:40.784852 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b28393a-bc83-414b-b383-70eb716d64ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77b67c64b5-zwfwq" podUID="1b28393a-bc83-414b-b383-70eb716d64ee" Sep 13 00:10:40.800183 containerd[1464]: time="2025-09-13T00:10:40.800107818Z" level=error msg="StopPodSandbox for \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\" failed" error="failed to destroy network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.800699 kubelet[2572]: E0913 00:10:40.800435 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:10:40.800699 kubelet[2572]: E0913 00:10:40.800509 2572 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd"} Sep 13 00:10:40.800699 kubelet[2572]: E0913 00:10:40.800563 2572 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:40.800699 kubelet[2572]: E0913 00:10:40.800609 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b44c8449-j7495" podUID="cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089" Sep 13 00:10:40.815084 containerd[1464]: time="2025-09-13T00:10:40.814920285Z" level=error msg="StopPodSandbox for \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\" failed" error="failed to destroy network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.815587 kubelet[2572]: E0913 00:10:40.815472 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:10:40.815587 kubelet[2572]: E0913 00:10:40.815548 2572 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41"} Sep 13 00:10:40.815724 kubelet[2572]: E0913 00:10:40.815603 2572 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbe2014e-59d9-4e26-9bc7-323114f09c1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:40.815724 kubelet[2572]: E0913 00:10:40.815639 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbe2014e-59d9-4e26-9bc7-323114f09c1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-st7d7" podUID="dbe2014e-59d9-4e26-9bc7-323114f09c1f" Sep 13 00:10:40.825516 containerd[1464]: time="2025-09-13T00:10:40.825437716Z" level=error msg="StopPodSandbox for \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\" failed" error="failed to destroy network for sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.826392 kubelet[2572]: E0913 00:10:40.826125 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:10:40.826392 kubelet[2572]: E0913 00:10:40.826200 2572 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf"} Sep 13 00:10:40.826392 kubelet[2572]: E0913 00:10:40.826274 2572 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e85b06c-dd91-46b3-8259-324d7b213ab5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:40.826392 kubelet[2572]: E0913 00:10:40.826345 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"7e85b06c-dd91-46b3-8259-324d7b213ab5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-xlg2k" podUID="7e85b06c-dd91-46b3-8259-324d7b213ab5" Sep 13 00:10:40.836534 containerd[1464]: time="2025-09-13T00:10:40.836457450Z" level=error msg="StopPodSandbox for \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\" failed" error="failed to destroy network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.836996 kubelet[2572]: E0913 00:10:40.836776 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:10:40.836996 kubelet[2572]: E0913 00:10:40.836861 2572 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505"} Sep 13 00:10:40.836996 kubelet[2572]: E0913 00:10:40.836911 2572 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:40.836996 kubelet[2572]: E0913 00:10:40.836952 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6sxxd" podUID="6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f" Sep 13 00:10:40.844682 containerd[1464]: time="2025-09-13T00:10:40.844601669Z" level=error msg="StopPodSandbox for \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\" failed" error="failed to destroy network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.845349 kubelet[2572]: E0913 00:10:40.845210 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:10:40.845685 kubelet[2572]: E0913 00:10:40.845651 2572 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac"} Sep 13 00:10:40.845985 kubelet[2572]: E0913 00:10:40.845888 2572 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87c611c9-e5c6-4577-8012-22c2c46a1517\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:40.845985 kubelet[2572]: E0913 00:10:40.845937 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87c611c9-e5c6-4577-8012-22c2c46a1517\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5df6d656dc-wgt8k" podUID="87c611c9-e5c6-4577-8012-22c2c46a1517" Sep 13 00:10:40.848437 containerd[1464]: time="2025-09-13T00:10:40.848377227Z" level=error msg="StopPodSandbox for \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\" failed" error="failed to destroy network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.848909 kubelet[2572]: E0913 00:10:40.848824 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:10:40.848909 kubelet[2572]: E0913 00:10:40.848885 2572 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f"} Sep 13 00:10:40.849078 kubelet[2572]: E0913 00:10:40.848932 2572 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a122eeee-1c48-4d4d-8b11-6b176f51757c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:40.849078 kubelet[2572]: E0913 00:10:40.848966 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a122eeee-1c48-4d4d-8b11-6b176f51757c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-858b45798c-wf79q" podUID="a122eeee-1c48-4d4d-8b11-6b176f51757c" Sep 13 00:10:40.852131 containerd[1464]: time="2025-09-13T00:10:40.852080446Z" level=error msg="StopPodSandbox for \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\" failed" error="failed to destroy network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:40.852597 kubelet[2572]: E0913 00:10:40.852549 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:10:40.852725 kubelet[2572]: E0913 00:10:40.852610 2572 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c"} Sep 13 00:10:40.852725 kubelet[2572]: E0913 00:10:40.852659 2572 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18c29eae-9c67-41af-9566-6e188885af9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:40.852725 kubelet[2572]: E0913 00:10:40.852695 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18c29eae-9c67-41af-9566-6e188885af9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lkdb" podUID="18c29eae-9c67-41af-9566-6e188885af9e" Sep 13 00:10:41.131587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f-shm.mount: Deactivated successfully. 
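The StopPodSandbox failures above are kubelet's cleanup loop hitting the same missing-nodename guard on the CNI DEL path; each attempt surfaces as a KillPodSandboxError and the pod worker re-queues the pod, which is why the identical error repeats per sandbox. Issuing the same CRI call by hand looks roughly like this, as a sketch against containerd's CRI socket; the socket path is the stock containerd location and the sandbox ID is copied from the failing entries above:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint; adjust if the runtime is relocated.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Sandbox ID copied from the failing entries above.
	_, err = client.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491",
	})
	if err != nil {
		// While /var/lib/calico/nodename is missing, this returns the
		// same "plugin type=calico failed (delete)" rpc error as above.
		log.Fatal(err)
	}
	fmt.Println("sandbox stopped")
}

The crictl stopp subcommand wraps the same RPC, which is the usual way to reproduce these errors interactively on the node.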
Sep 13 00:10:41.131766 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491-shm.mount: Deactivated successfully. Sep 13 00:10:41.131873 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac-shm.mount: Deactivated successfully. Sep 13 00:10:41.131977 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf-shm.mount: Deactivated successfully. Sep 13 00:10:41.132082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505-shm.mount: Deactivated successfully. Sep 13 00:10:48.028644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036702191.mount: Deactivated successfully. Sep 13 00:10:48.063180 containerd[1464]: time="2025-09-13T00:10:48.063101463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:48.064731 containerd[1464]: time="2025-09-13T00:10:48.064479404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:10:48.067721 containerd[1464]: time="2025-09-13T00:10:48.066222451Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:48.069264 containerd[1464]: time="2025-09-13T00:10:48.069123718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:48.070517 containerd[1464]: time="2025-09-13T00:10:48.069945788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.513741807s" Sep 13 00:10:48.070517 containerd[1464]: time="2025-09-13T00:10:48.069998882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:10:48.095381 containerd[1464]: time="2025-09-13T00:10:48.095315442Z" level=info msg="CreateContainer within sandbox \"d4161a9de7ce764fa27a90a1be993d0f7a73d23350bfd86b6cb18db38211a984\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:10:48.123789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362590942.mount: Deactivated successfully. 
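The ImageCreate and "Pulled image" entries above show containerd resolving ghcr.io/flatcar/calico/node:v3.30.3 to a digest, reading roughly 157 MB of content, and recording both the tag and the repo digest, about 7.5 s end to end. The equivalent pull through the containerd 1.x Go client, in the k8s.io namespace where CRI-managed images live, as a sketch assuming the default socket path:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Image reference copied from the log above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Corresponds to the repo digest and byte count reported in the log.
	fmt.Printf("pulled %s (%s, %d bytes)\n", img.Name(), img.Target().Digest, size)
}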
Sep 13 00:10:48.127137 containerd[1464]: time="2025-09-13T00:10:48.127020026Z" level=info msg="CreateContainer within sandbox \"d4161a9de7ce764fa27a90a1be993d0f7a73d23350bfd86b6cb18db38211a984\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"68f3b8c0d34eb1acd8e6289a64dc325432705921ce792a1b0715228cfc043014\"" Sep 13 00:10:48.129636 containerd[1464]: time="2025-09-13T00:10:48.128211544Z" level=info msg="StartContainer for \"68f3b8c0d34eb1acd8e6289a64dc325432705921ce792a1b0715228cfc043014\"" Sep 13 00:10:48.169478 systemd[1]: Started cri-containerd-68f3b8c0d34eb1acd8e6289a64dc325432705921ce792a1b0715228cfc043014.scope - libcontainer container 68f3b8c0d34eb1acd8e6289a64dc325432705921ce792a1b0715228cfc043014. Sep 13 00:10:48.217621 containerd[1464]: time="2025-09-13T00:10:48.217557932Z" level=info msg="StartContainer for \"68f3b8c0d34eb1acd8e6289a64dc325432705921ce792a1b0715228cfc043014\" returns successfully" Sep 13 00:10:48.369322 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:10:48.369507 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 13 00:10:48.519485 containerd[1464]: time="2025-09-13T00:10:48.519428133Z" level=info msg="StopPodSandbox for \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\"" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.614 [INFO][3854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.616 [INFO][3854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" iface="eth0" netns="/var/run/netns/cni-f1ae543d-1b5f-77b1-9091-cee6832c5cff" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.616 [INFO][3854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" iface="eth0" netns="/var/run/netns/cni-f1ae543d-1b5f-77b1-9091-cee6832c5cff" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.617 [INFO][3854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" iface="eth0" netns="/var/run/netns/cni-f1ae543d-1b5f-77b1-9091-cee6832c5cff" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.617 [INFO][3854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.617 [INFO][3854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.674 [INFO][3862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" HandleID="k8s-pod-network.c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.677 [INFO][3862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.678 [INFO][3862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.698 [WARNING][3862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" HandleID="k8s-pod-network.c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.699 [INFO][3862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" HandleID="k8s-pod-network.c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.702 [INFO][3862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:48.711495 containerd[1464]: 2025-09-13 00:10:48.708 [INFO][3854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:10:48.712215 containerd[1464]: time="2025-09-13T00:10:48.711763973Z" level=info msg="TearDown network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\" successfully" Sep 13 00:10:48.712215 containerd[1464]: time="2025-09-13T00:10:48.711806290Z" level=info msg="StopPodSandbox for \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\" returns successfully" Sep 13 00:10:48.881506 kubelet[2572]: I0913 00:10:48.881370 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87c611c9-e5c6-4577-8012-22c2c46a1517-whisker-ca-bundle\") pod \"87c611c9-e5c6-4577-8012-22c2c46a1517\" (UID: \"87c611c9-e5c6-4577-8012-22c2c46a1517\") " Sep 13 00:10:48.881506 kubelet[2572]: I0913 00:10:48.881476 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7vkd\" (UniqueName: \"kubernetes.io/projected/87c611c9-e5c6-4577-8012-22c2c46a1517-kube-api-access-n7vkd\") pod \"87c611c9-e5c6-4577-8012-22c2c46a1517\" (UID: \"87c611c9-e5c6-4577-8012-22c2c46a1517\") " Sep 13 00:10:48.882134 kubelet[2572]: I0913 00:10:48.881551 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87c611c9-e5c6-4577-8012-22c2c46a1517-whisker-backend-key-pair\") pod \"87c611c9-e5c6-4577-8012-22c2c46a1517\" (UID: \"87c611c9-e5c6-4577-8012-22c2c46a1517\") " Sep 13 00:10:48.885294 kubelet[2572]: I0913 00:10:48.884337 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c611c9-e5c6-4577-8012-22c2c46a1517-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "87c611c9-e5c6-4577-8012-22c2c46a1517" (UID: "87c611c9-e5c6-4577-8012-22c2c46a1517"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:10:48.889184 kubelet[2572]: I0913 00:10:48.889129 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c611c9-e5c6-4577-8012-22c2c46a1517-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "87c611c9-e5c6-4577-8012-22c2c46a1517" (UID: "87c611c9-e5c6-4577-8012-22c2c46a1517"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:10:48.891481 kubelet[2572]: I0913 00:10:48.891435 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c611c9-e5c6-4577-8012-22c2c46a1517-kube-api-access-n7vkd" (OuterVolumeSpecName: "kube-api-access-n7vkd") pod "87c611c9-e5c6-4577-8012-22c2c46a1517" (UID: "87c611c9-e5c6-4577-8012-22c2c46a1517"). InnerVolumeSpecName "kube-api-access-n7vkd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:10:48.982189 kubelet[2572]: I0913 00:10:48.982128 2572 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87c611c9-e5c6-4577-8012-22c2c46a1517-whisker-ca-bundle\") on node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" DevicePath \"\"" Sep 13 00:10:48.982189 kubelet[2572]: I0913 00:10:48.982184 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7vkd\" (UniqueName: \"kubernetes.io/projected/87c611c9-e5c6-4577-8012-22c2c46a1517-kube-api-access-n7vkd\") on node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" DevicePath \"\"" Sep 13 00:10:48.982189 kubelet[2572]: I0913 00:10:48.982202 2572 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87c611c9-e5c6-4577-8012-22c2c46a1517-whisker-backend-key-pair\") on node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" DevicePath \"\"" Sep 13 00:10:49.029410 systemd[1]: run-netns-cni\x2df1ae543d\x2d1b5f\x2d77b1\x2d9091\x2dcee6832c5cff.mount: Deactivated successfully. Sep 13 00:10:49.029551 systemd[1]: var-lib-kubelet-pods-87c611c9\x2de5c6\x2d4577\x2d8012\x2d22c2c46a1517-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn7vkd.mount: Deactivated successfully. Sep 13 00:10:49.029686 systemd[1]: var-lib-kubelet-pods-87c611c9\x2de5c6\x2d4577\x2d8012\x2d22c2c46a1517-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:10:49.687178 systemd[1]: Removed slice kubepods-besteffort-pod87c611c9_e5c6_4577_8012_22c2c46a1517.slice - libcontainer container kubepods-besteffort-pod87c611c9_e5c6_4577_8012_22c2c46a1517.slice. Sep 13 00:10:49.704546 kubelet[2572]: I0913 00:10:49.703593 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wgqsq" podStartSLOduration=2.315369458 podStartE2EDuration="20.703558874s" podCreationTimestamp="2025-09-13 00:10:29 +0000 UTC" firstStartedPulling="2025-09-13 00:10:29.683325778 +0000 UTC m=+21.557342730" lastFinishedPulling="2025-09-13 00:10:48.071515193 +0000 UTC m=+39.945532146" observedRunningTime="2025-09-13 00:10:48.71810506 +0000 UTC m=+40.592122010" watchObservedRunningTime="2025-09-13 00:10:49.703558874 +0000 UTC m=+41.577575847" Sep 13 00:10:49.767151 systemd[1]: Created slice kubepods-besteffort-pod59a6e668_cf61_4a93_82f0_59937b8bb8a7.slice - libcontainer container kubepods-besteffort-pod59a6e668_cf61_4a93_82f0_59937b8bb8a7.slice. 
Sep 13 00:10:49.890773 kubelet[2572]: I0913 00:10:49.890697 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a6e668-cf61-4a93-82f0-59937b8bb8a7-whisker-ca-bundle\") pod \"whisker-78f74f95fd-p4ctr\" (UID: \"59a6e668-cf61-4a93-82f0-59937b8bb8a7\") " pod="calico-system/whisker-78f74f95fd-p4ctr" Sep 13 00:10:49.890773 kubelet[2572]: I0913 00:10:49.890778 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59a6e668-cf61-4a93-82f0-59937b8bb8a7-whisker-backend-key-pair\") pod \"whisker-78f74f95fd-p4ctr\" (UID: \"59a6e668-cf61-4a93-82f0-59937b8bb8a7\") " pod="calico-system/whisker-78f74f95fd-p4ctr" Sep 13 00:10:49.891416 kubelet[2572]: I0913 00:10:49.890814 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8crg\" (UniqueName: \"kubernetes.io/projected/59a6e668-cf61-4a93-82f0-59937b8bb8a7-kube-api-access-m8crg\") pod \"whisker-78f74f95fd-p4ctr\" (UID: \"59a6e668-cf61-4a93-82f0-59937b8bb8a7\") " pod="calico-system/whisker-78f74f95fd-p4ctr" Sep 13 00:10:50.074403 containerd[1464]: time="2025-09-13T00:10:50.073808107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78f74f95fd-p4ctr,Uid:59a6e668-cf61-4a93-82f0-59937b8bb8a7,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:50.323400 systemd-networkd[1370]: calia030d6f2f61: Link UP Sep 13 00:10:50.327057 systemd-networkd[1370]: calia030d6f2f61: Gained carrier Sep 13 00:10:50.350639 kubelet[2572]: I0913 00:10:50.350572 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c611c9-e5c6-4577-8012-22c2c46a1517" path="/var/lib/kubelet/pods/87c611c9-e5c6-4577-8012-22c2c46a1517/volumes" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.161 [INFO][3960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.178 [INFO][3960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0 whisker-78f74f95fd- calico-system 59a6e668-cf61-4a93-82f0-59937b8bb8a7 936 0 2025-09-13 00:10:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78f74f95fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c whisker-78f74f95fd-p4ctr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia030d6f2f61 [] [] }} ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Namespace="calico-system" Pod="whisker-78f74f95fd-p4ctr" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.179 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Namespace="calico-system" Pod="whisker-78f74f95fd-p4ctr" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.236 [INFO][3980] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" HandleID="k8s-pod-network.4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.237 [INFO][3980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" HandleID="k8s-pod-network.4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"whisker-78f74f95fd-p4ctr", "timestamp":"2025-09-13 00:10:50.236855599 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.237 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.237 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.237 [INFO][3980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.248 [INFO][3980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.255 [INFO][3980] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.263 [INFO][3980] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.267 [INFO][3980] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.271 [INFO][3980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.271 [INFO][3980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.276 [INFO][3980] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2 Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.290 [INFO][3980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 
handle="k8s-pod-network.4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.298 [INFO][3980] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.65/26] block=192.168.61.64/26 handle="k8s-pod-network.4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.298 [INFO][3980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.65/26] handle="k8s-pod-network.4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.298 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:50.370547 containerd[1464]: 2025-09-13 00:10:50.298 [INFO][3980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.65/26] IPv6=[] ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" HandleID="k8s-pod-network.4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" Sep 13 00:10:50.373907 containerd[1464]: 2025-09-13 00:10:50.302 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Namespace="calico-system" Pod="whisker-78f74f95fd-p4ctr" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0", GenerateName:"whisker-78f74f95fd-", Namespace:"calico-system", SelfLink:"", UID:"59a6e668-cf61-4a93-82f0-59937b8bb8a7", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78f74f95fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"whisker-78f74f95fd-p4ctr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia030d6f2f61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:50.373907 containerd[1464]: 2025-09-13 00:10:50.302 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.65/32] ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Namespace="calico-system" Pod="whisker-78f74f95fd-p4ctr" 
WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" Sep 13 00:10:50.373907 containerd[1464]: 2025-09-13 00:10:50.302 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia030d6f2f61 ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Namespace="calico-system" Pod="whisker-78f74f95fd-p4ctr" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" Sep 13 00:10:50.373907 containerd[1464]: 2025-09-13 00:10:50.327 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Namespace="calico-system" Pod="whisker-78f74f95fd-p4ctr" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" Sep 13 00:10:50.373907 containerd[1464]: 2025-09-13 00:10:50.334 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Namespace="calico-system" Pod="whisker-78f74f95fd-p4ctr" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0", GenerateName:"whisker-78f74f95fd-", Namespace:"calico-system", SelfLink:"", UID:"59a6e668-cf61-4a93-82f0-59937b8bb8a7", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78f74f95fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2", Pod:"whisker-78f74f95fd-p4ctr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia030d6f2f61", MAC:"12:20:de:ee:de:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:50.373907 containerd[1464]: 2025-09-13 00:10:50.363 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2" Namespace="calico-system" Pod="whisker-78f74f95fd-p4ctr" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--78f74f95fd--p4ctr-eth0" Sep 13 00:10:50.444165 containerd[1464]: time="2025-09-13T00:10:50.443961508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:50.444165 containerd[1464]: time="2025-09-13T00:10:50.444086470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:50.444165 containerd[1464]: time="2025-09-13T00:10:50.444114611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:50.449897 containerd[1464]: time="2025-09-13T00:10:50.446728255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:50.508992 systemd[1]: Started cri-containerd-4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2.scope - libcontainer container 4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2. Sep 13 00:10:50.625376 containerd[1464]: time="2025-09-13T00:10:50.623887991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78f74f95fd-p4ctr,Uid:59a6e668-cf61-4a93-82f0-59937b8bb8a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2\"" Sep 13 00:10:50.630892 containerd[1464]: time="2025-09-13T00:10:50.629783734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:10:50.847384 kernel: bpftool[4073]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:10:51.155678 systemd-networkd[1370]: vxlan.calico: Link UP Sep 13 00:10:51.155695 systemd-networkd[1370]: vxlan.calico: Gained carrier Sep 13 00:10:51.331764 containerd[1464]: time="2025-09-13T00:10:51.329915929Z" level=info msg="StopPodSandbox for \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\"" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.419 [INFO][4118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.420 [INFO][4118] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" iface="eth0" netns="/var/run/netns/cni-64a85c39-4ccc-e58b-2de6-8251d48e2d28" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.421 [INFO][4118] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" iface="eth0" netns="/var/run/netns/cni-64a85c39-4ccc-e58b-2de6-8251d48e2d28" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.421 [INFO][4118] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" iface="eth0" netns="/var/run/netns/cni-64a85c39-4ccc-e58b-2de6-8251d48e2d28" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.421 [INFO][4118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.421 [INFO][4118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.475 [INFO][4126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" HandleID="k8s-pod-network.333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.475 [INFO][4126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.475 [INFO][4126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.488 [WARNING][4126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" HandleID="k8s-pod-network.333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.488 [INFO][4126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" HandleID="k8s-pod-network.333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.492 [INFO][4126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:51.499437 containerd[1464]: 2025-09-13 00:10:51.496 [INFO][4118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:10:51.502371 containerd[1464]: time="2025-09-13T00:10:51.499625069Z" level=info msg="TearDown network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\" successfully" Sep 13 00:10:51.502371 containerd[1464]: time="2025-09-13T00:10:51.499661275Z" level=info msg="StopPodSandbox for \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\" returns successfully" Sep 13 00:10:51.502371 containerd[1464]: time="2025-09-13T00:10:51.501616925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lkdb,Uid:18c29eae-9c67-41af-9566-6e188885af9e,Namespace:kube-system,Attempt:1,}" Sep 13 00:10:51.515371 systemd[1]: run-netns-cni\x2d64a85c39\x2d4ccc\x2de58b\x2d2de6\x2d8251d48e2d28.mount: Deactivated successfully. 
Sep 13 00:10:51.739322 systemd-networkd[1370]: calia030d6f2f61: Gained IPv6LL Sep 13 00:10:51.940448 systemd-networkd[1370]: cali5543cbc496a: Link UP Sep 13 00:10:51.940793 systemd-networkd[1370]: cali5543cbc496a: Gained carrier Sep 13 00:10:51.963540 containerd[1464]: time="2025-09-13T00:10:51.963180959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:51.967811 containerd[1464]: time="2025-09-13T00:10:51.967689765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:10:51.969916 containerd[1464]: time="2025-09-13T00:10:51.969862995Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:51.981961 containerd[1464]: time="2025-09-13T00:10:51.981802636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:51.987909 containerd[1464]: time="2025-09-13T00:10:51.987521219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.356859662s" Sep 13 00:10:51.987909 containerd[1464]: time="2025-09-13T00:10:51.987583652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.721 [INFO][4140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0 coredns-668d6bf9bc- kube-system 18c29eae-9c67-41af-9566-6e188885af9e 946 0 2025-09-13 00:10:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c coredns-668d6bf9bc-7lkdb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5543cbc496a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lkdb" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.722 [INFO][4140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lkdb" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.857 [INFO][4177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" 
HandleID="k8s-pod-network.3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.858 [INFO][4177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" HandleID="k8s-pod-network.3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b0c10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"coredns-668d6bf9bc-7lkdb", "timestamp":"2025-09-13 00:10:51.857246639 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.858 [INFO][4177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.858 [INFO][4177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.858 [INFO][4177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.876 [INFO][4177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.884 [INFO][4177] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.894 [INFO][4177] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.900 [INFO][4177] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.905 [INFO][4177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.905 [INFO][4177] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.907 [INFO][4177] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.916 [INFO][4177] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" 
host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.929 [INFO][4177] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.66/26] block=192.168.61.64/26 handle="k8s-pod-network.3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.929 [INFO][4177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.66/26] handle="k8s-pod-network.3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.930 [INFO][4177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:51.992843 containerd[1464]: 2025-09-13 00:10:51.930 [INFO][4177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.66/26] IPv6=[] ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" HandleID="k8s-pod-network.3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:51.996982 containerd[1464]: 2025-09-13 00:10:51.933 [INFO][4140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lkdb" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"18c29eae-9c67-41af-9566-6e188885af9e", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"coredns-668d6bf9bc-7lkdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5543cbc496a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:51.996982 containerd[1464]: 2025-09-13 
00:10:51.934 [INFO][4140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.66/32] ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lkdb" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:51.996982 containerd[1464]: 2025-09-13 00:10:51.934 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5543cbc496a ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lkdb" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:51.996982 containerd[1464]: 2025-09-13 00:10:51.941 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lkdb" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:51.996982 containerd[1464]: 2025-09-13 00:10:51.944 [INFO][4140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lkdb" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"18c29eae-9c67-41af-9566-6e188885af9e", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f", Pod:"coredns-668d6bf9bc-7lkdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5543cbc496a", MAC:"fa:4b:c6:03:35:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:51.996982 containerd[1464]: 2025-09-13 
00:10:51.973 [INFO][4140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lkdb" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:10:52.018128 containerd[1464]: time="2025-09-13T00:10:52.017009803Z" level=info msg="CreateContainer within sandbox \"4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:10:52.063810 containerd[1464]: time="2025-09-13T00:10:52.063112188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:52.063810 containerd[1464]: time="2025-09-13T00:10:52.063409041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:52.063810 containerd[1464]: time="2025-09-13T00:10:52.063433694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:52.063810 containerd[1464]: time="2025-09-13T00:10:52.063601514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:52.064871 containerd[1464]: time="2025-09-13T00:10:52.064817169Z" level=info msg="CreateContainer within sandbox \"4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3cbc6e537a3b743e84bd6c898ee02dced50fdf0c6d690e5817ceb38c0add20d2\"" Sep 13 00:10:52.070990 containerd[1464]: time="2025-09-13T00:10:52.070707518Z" level=info msg="StartContainer for \"3cbc6e537a3b743e84bd6c898ee02dced50fdf0c6d690e5817ceb38c0add20d2\"" Sep 13 00:10:52.114193 systemd[1]: Started cri-containerd-3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f.scope - libcontainer container 3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f. Sep 13 00:10:52.138572 systemd[1]: Started cri-containerd-3cbc6e537a3b743e84bd6c898ee02dced50fdf0c6d690e5817ceb38c0add20d2.scope - libcontainer container 3cbc6e537a3b743e84bd6c898ee02dced50fdf0c6d690e5817ceb38c0add20d2. 
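[Editor's note] Each "Started cri-containerd-<id>.scope" entry is systemd creating a transient scope unit for one shim, and the 64-hex <id> is the same sandbox or container ID that containerd prints in its RunPodSandbox and CreateContainer messages. CRI-managed objects live in containerd's "k8s.io" namespace, so they can be cross-referenced against these journal lines with the standard containerd Go client. A small sketch follows; the socket path is the conventional default and may differ on a given host.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes pods and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			log.Fatal(err)
		}
		// These IDs match the cri-containerd-<id>.scope units above.
		fmt.Println(c.ID(), info.Image)
	}
}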
Sep 13 00:10:52.203277 containerd[1464]: time="2025-09-13T00:10:52.202738607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lkdb,Uid:18c29eae-9c67-41af-9566-6e188885af9e,Namespace:kube-system,Attempt:1,} returns sandbox id \"3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f\"" Sep 13 00:10:52.213957 containerd[1464]: time="2025-09-13T00:10:52.213663466Z" level=info msg="CreateContainer within sandbox \"3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:10:52.239055 containerd[1464]: time="2025-09-13T00:10:52.238999057Z" level=info msg="CreateContainer within sandbox \"3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0bf725c1a37e99c680abae9c0343440f53b123b1db03bdcf98aecaa8bb6f73d\"" Sep 13 00:10:52.240412 containerd[1464]: time="2025-09-13T00:10:52.239575509Z" level=info msg="StartContainer for \"3cbc6e537a3b743e84bd6c898ee02dced50fdf0c6d690e5817ceb38c0add20d2\" returns successfully" Sep 13 00:10:52.241266 containerd[1464]: time="2025-09-13T00:10:52.240911940Z" level=info msg="StartContainer for \"c0bf725c1a37e99c680abae9c0343440f53b123b1db03bdcf98aecaa8bb6f73d\"" Sep 13 00:10:52.243098 containerd[1464]: time="2025-09-13T00:10:52.243056926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:10:52.249647 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Sep 13 00:10:52.301497 systemd[1]: Started cri-containerd-c0bf725c1a37e99c680abae9c0343440f53b123b1db03bdcf98aecaa8bb6f73d.scope - libcontainer container c0bf725c1a37e99c680abae9c0343440f53b123b1db03bdcf98aecaa8bb6f73d. Sep 13 00:10:52.335269 containerd[1464]: time="2025-09-13T00:10:52.334022201Z" level=info msg="StopPodSandbox for \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\"" Sep 13 00:10:52.335269 containerd[1464]: time="2025-09-13T00:10:52.334343853Z" level=info msg="StopPodSandbox for \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\"" Sep 13 00:10:52.359687 containerd[1464]: time="2025-09-13T00:10:52.359619910Z" level=info msg="StartContainer for \"c0bf725c1a37e99c680abae9c0343440f53b123b1db03bdcf98aecaa8bb6f73d\" returns successfully" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.455 [INFO][4323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.460 [INFO][4323] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" iface="eth0" netns="/var/run/netns/cni-5b92c7af-7f65-f264-6868-4fd9d4f808fa" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.462 [INFO][4323] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" iface="eth0" netns="/var/run/netns/cni-5b92c7af-7f65-f264-6868-4fd9d4f808fa" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.462 [INFO][4323] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" iface="eth0" netns="/var/run/netns/cni-5b92c7af-7f65-f264-6868-4fd9d4f808fa" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.462 [INFO][4323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.462 [INFO][4323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.540 [INFO][4345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" HandleID="k8s-pod-network.05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.540 [INFO][4345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.541 [INFO][4345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.559 [WARNING][4345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" HandleID="k8s-pod-network.05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.559 [INFO][4345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" HandleID="k8s-pod-network.05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.562 [INFO][4345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:52.570104 containerd[1464]: 2025-09-13 00:10:52.564 [INFO][4323] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:10:52.570104 containerd[1464]: time="2025-09-13T00:10:52.567328695Z" level=info msg="TearDown network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\" successfully" Sep 13 00:10:52.570104 containerd[1464]: time="2025-09-13T00:10:52.567366661Z" level=info msg="StopPodSandbox for \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\" returns successfully" Sep 13 00:10:52.573976 containerd[1464]: time="2025-09-13T00:10:52.572580336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b44c8449-j7495,Uid:cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089,Namespace:calico-system,Attempt:1,}" Sep 13 00:10:52.576127 systemd[1]: run-netns-cni\x2d5b92c7af\x2d7f65\x2df264\x2d6868\x2d4fd9d4f808fa.mount: Deactivated successfully. 
Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.551 [INFO][4331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.552 [INFO][4331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" iface="eth0" netns="/var/run/netns/cni-feb561cc-1da8-554c-6ca3-ad925365a80d" Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.552 [INFO][4331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" iface="eth0" netns="/var/run/netns/cni-feb561cc-1da8-554c-6ca3-ad925365a80d" Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.552 [INFO][4331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" iface="eth0" netns="/var/run/netns/cni-feb561cc-1da8-554c-6ca3-ad925365a80d" Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.552 [INFO][4331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.552 [INFO][4331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.629 [INFO][4356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" HandleID="k8s-pod-network.0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.629 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.629 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.639 [WARNING][4356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" HandleID="k8s-pod-network.0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.639 [INFO][4356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" HandleID="k8s-pod-network.0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.643 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:52.652394 containerd[1464]: 2025-09-13 00:10:52.645 [INFO][4331] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:10:52.653175 containerd[1464]: time="2025-09-13T00:10:52.652592777Z" level=info msg="TearDown network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\" successfully" Sep 13 00:10:52.653175 containerd[1464]: time="2025-09-13T00:10:52.652634045Z" level=info msg="StopPodSandbox for \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\" returns successfully" Sep 13 00:10:52.655392 containerd[1464]: time="2025-09-13T00:10:52.654997961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858b45798c-wf79q,Uid:a122eeee-1c48-4d4d-8b11-6b176f51757c,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:10:52.661089 systemd[1]: run-netns-cni\x2dfeb561cc\x2d1da8\x2d554c\x2d6ca3\x2dad925365a80d.mount: Deactivated successfully. Sep 13 00:10:52.838898 kubelet[2572]: I0913 00:10:52.836981 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7lkdb" podStartSLOduration=38.836951266 podStartE2EDuration="38.836951266s" podCreationTimestamp="2025-09-13 00:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:52.774181494 +0000 UTC m=+44.648198456" watchObservedRunningTime="2025-09-13 00:10:52.836951266 +0000 UTC m=+44.710968494" Sep 13 00:10:52.976612 systemd-networkd[1370]: calia95e263afa7: Link UP Sep 13 00:10:52.979978 systemd-networkd[1370]: calia95e263afa7: Gained carrier Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.691 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0 calico-kube-controllers-5b44c8449- calico-system cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089 960 0 2025-09-13 00:10:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b44c8449 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c calico-kube-controllers-5b44c8449-j7495 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia95e263afa7 [] [] }} ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Namespace="calico-system" Pod="calico-kube-controllers-5b44c8449-j7495" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.693 [INFO][4361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Namespace="calico-system" Pod="calico-kube-controllers-5b44c8449-j7495" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.767 [INFO][4385] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" HandleID="k8s-pod-network.5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" 
Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.769 [INFO][4385] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" HandleID="k8s-pod-network.5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"calico-kube-controllers-5b44c8449-j7495", "timestamp":"2025-09-13 00:10:52.767640895 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.770 [INFO][4385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.770 [INFO][4385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.770 [INFO][4385] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.827 [INFO][4385] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.855 [INFO][4385] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.875 [INFO][4385] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.882 [INFO][4385] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.910 [INFO][4385] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.910 [INFO][4385] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.915 [INFO][4385] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.930 [INFO][4385] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.021800 
containerd[1464]: 2025-09-13 00:10:52.954 [INFO][4385] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.67/26] block=192.168.61.64/26 handle="k8s-pod-network.5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.954 [INFO][4385] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.67/26] handle="k8s-pod-network.5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.954 [INFO][4385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:53.021800 containerd[1464]: 2025-09-13 00:10:52.954 [INFO][4385] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.67/26] IPv6=[] ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" HandleID="k8s-pod-network.5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:53.022935 containerd[1464]: 2025-09-13 00:10:52.964 [INFO][4361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Namespace="calico-system" Pod="calico-kube-controllers-5b44c8449-j7495" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0", GenerateName:"calico-kube-controllers-5b44c8449-", Namespace:"calico-system", SelfLink:"", UID:"cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b44c8449", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"calico-kube-controllers-5b44c8449-j7495", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia95e263afa7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:53.022935 containerd[1464]: 2025-09-13 00:10:52.965 [INFO][4361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.67/32] ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Namespace="calico-system" Pod="calico-kube-controllers-5b44c8449-j7495" 
WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:53.022935 containerd[1464]: 2025-09-13 00:10:52.965 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia95e263afa7 ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Namespace="calico-system" Pod="calico-kube-controllers-5b44c8449-j7495" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:53.022935 containerd[1464]: 2025-09-13 00:10:52.976 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Namespace="calico-system" Pod="calico-kube-controllers-5b44c8449-j7495" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:53.022935 containerd[1464]: 2025-09-13 00:10:52.977 [INFO][4361] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Namespace="calico-system" Pod="calico-kube-controllers-5b44c8449-j7495" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0", GenerateName:"calico-kube-controllers-5b44c8449-", Namespace:"calico-system", SelfLink:"", UID:"cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b44c8449", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf", Pod:"calico-kube-controllers-5b44c8449-j7495", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia95e263afa7", MAC:"9a:a2:9a:c9:d7:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:53.022935 containerd[1464]: 2025-09-13 00:10:53.018 [INFO][4361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf" Namespace="calico-system" Pod="calico-kube-controllers-5b44c8449-j7495" 
WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:10:53.120193 containerd[1464]: time="2025-09-13T00:10:53.116798032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:53.120193 containerd[1464]: time="2025-09-13T00:10:53.118520209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:53.120193 containerd[1464]: time="2025-09-13T00:10:53.118555580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.120193 containerd[1464]: time="2025-09-13T00:10:53.118742162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.137742 systemd-networkd[1370]: cali414600e7f9e: Link UP Sep 13 00:10:53.153184 systemd-networkd[1370]: cali414600e7f9e: Gained carrier Sep 13 00:10:53.181582 systemd[1]: Started cri-containerd-5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf.scope - libcontainer container 5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf. Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:52.855 [INFO][4374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0 calico-apiserver-858b45798c- calico-apiserver a122eeee-1c48-4d4d-8b11-6b176f51757c 961 0 2025-09-13 00:10:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:858b45798c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c calico-apiserver-858b45798c-wf79q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali414600e7f9e [] [] }} ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-wf79q" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:52.855 [INFO][4374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-wf79q" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:52.947 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:52.948 [INFO][4394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" 
HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"calico-apiserver-858b45798c-wf79q", "timestamp":"2025-09-13 00:10:52.947956137 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:52.948 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:52.959 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:52.959 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.002 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.015 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.035 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.041 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.055 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.055 [INFO][4394] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.061 [INFO][4394] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.073 [INFO][4394] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.109 [INFO][4394] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.68/26] block=192.168.61.64/26 handle="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 
00:10:53.110 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.68/26] handle="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.110 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:53.200112 containerd[1464]: 2025-09-13 00:10:53.110 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.68/26] IPv6=[] ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:53.201690 containerd[1464]: 2025-09-13 00:10:53.121 [INFO][4374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-wf79q" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0", GenerateName:"calico-apiserver-858b45798c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a122eeee-1c48-4d4d-8b11-6b176f51757c", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858b45798c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"calico-apiserver-858b45798c-wf79q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali414600e7f9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:53.201690 containerd[1464]: 2025-09-13 00:10:53.122 [INFO][4374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.68/32] ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-wf79q" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:53.201690 containerd[1464]: 2025-09-13 00:10:53.122 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali414600e7f9e ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-wf79q" 
WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:53.201690 containerd[1464]: 2025-09-13 00:10:53.164 [INFO][4374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-wf79q" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:53.201690 containerd[1464]: 2025-09-13 00:10:53.166 [INFO][4374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-wf79q" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0", GenerateName:"calico-apiserver-858b45798c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a122eeee-1c48-4d4d-8b11-6b176f51757c", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858b45798c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b", Pod:"calico-apiserver-858b45798c-wf79q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali414600e7f9e", MAC:"5a:84:53:f1:59:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:53.201690 containerd[1464]: 2025-09-13 00:10:53.194 [INFO][4374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-wf79q" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:10:53.283541 containerd[1464]: time="2025-09-13T00:10:53.280355051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:53.283541 containerd[1464]: time="2025-09-13T00:10:53.280449231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:53.283541 containerd[1464]: time="2025-09-13T00:10:53.280476319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.283541 containerd[1464]: time="2025-09-13T00:10:53.280614079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.325516 systemd[1]: Started cri-containerd-68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b.scope - libcontainer container 68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b. Sep 13 00:10:53.331495 containerd[1464]: time="2025-09-13T00:10:53.331434055Z" level=info msg="StopPodSandbox for \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\"" Sep 13 00:10:53.333727 containerd[1464]: time="2025-09-13T00:10:53.332852793Z" level=info msg="StopPodSandbox for \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\"" Sep 13 00:10:53.338685 systemd-networkd[1370]: cali5543cbc496a: Gained IPv6LL Sep 13 00:10:53.607975 containerd[1464]: time="2025-09-13T00:10:53.607812351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b44c8449-j7495,Uid:cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089,Namespace:calico-system,Attempt:1,} returns sandbox id \"5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf\"" Sep 13 00:10:53.644944 containerd[1464]: time="2025-09-13T00:10:53.644887278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858b45798c-wf79q,Uid:a122eeee-1c48-4d4d-8b11-6b176f51757c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\"" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.597 [INFO][4504] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.600 [INFO][4504] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" iface="eth0" netns="/var/run/netns/cni-7c990463-5fa2-6b97-2e39-d49c5e8adb8b" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.600 [INFO][4504] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" iface="eth0" netns="/var/run/netns/cni-7c990463-5fa2-6b97-2e39-d49c5e8adb8b" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.601 [INFO][4504] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" iface="eth0" netns="/var/run/netns/cni-7c990463-5fa2-6b97-2e39-d49c5e8adb8b" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.601 [INFO][4504] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.601 [INFO][4504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.710 [INFO][4527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" HandleID="k8s-pod-network.175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.710 [INFO][4527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.710 [INFO][4527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.732 [WARNING][4527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" HandleID="k8s-pod-network.175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.732 [INFO][4527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" HandleID="k8s-pod-network.175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.734 [INFO][4527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:53.743400 containerd[1464]: 2025-09-13 00:10:53.740 [INFO][4504] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:10:53.746794 containerd[1464]: time="2025-09-13T00:10:53.743576699Z" level=info msg="TearDown network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\" successfully" Sep 13 00:10:53.746794 containerd[1464]: time="2025-09-13T00:10:53.743614741Z" level=info msg="StopPodSandbox for \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\" returns successfully" Sep 13 00:10:53.746794 containerd[1464]: time="2025-09-13T00:10:53.744492618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6sxxd,Uid:6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f,Namespace:kube-system,Attempt:1,}" Sep 13 00:10:53.760406 systemd[1]: run-netns-cni\x2d7c990463\x2d5fa2\x2d6b97\x2d2e39\x2dd49c5e8adb8b.mount: Deactivated successfully. 
Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.681 [INFO][4505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.682 [INFO][4505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" iface="eth0" netns="/var/run/netns/cni-b208a05c-a506-d32d-7b25-7c3d30208c26" Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.683 [INFO][4505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" iface="eth0" netns="/var/run/netns/cni-b208a05c-a506-d32d-7b25-7c3d30208c26" Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.683 [INFO][4505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" iface="eth0" netns="/var/run/netns/cni-b208a05c-a506-d32d-7b25-7c3d30208c26" Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.683 [INFO][4505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.684 [INFO][4505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.772 [INFO][4540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" HandleID="k8s-pod-network.0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.776 [INFO][4540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.777 [INFO][4540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.801 [WARNING][4540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" HandleID="k8s-pod-network.0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.801 [INFO][4540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" HandleID="k8s-pod-network.0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.804 [INFO][4540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:53.811125 containerd[1464]: 2025-09-13 00:10:53.808 [INFO][4505] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:10:53.811919 containerd[1464]: time="2025-09-13T00:10:53.811325457Z" level=info msg="TearDown network for sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\" successfully" Sep 13 00:10:53.811919 containerd[1464]: time="2025-09-13T00:10:53.811364495Z" level=info msg="StopPodSandbox for \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\" returns successfully" Sep 13 00:10:53.813353 containerd[1464]: time="2025-09-13T00:10:53.812190614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xlg2k,Uid:7e85b06c-dd91-46b3-8259-324d7b213ab5,Namespace:calico-system,Attempt:1,}" Sep 13 00:10:53.823165 systemd[1]: run-netns-cni\x2db208a05c\x2da506\x2dd32d\x2d7b25\x2d7c3d30208c26.mount: Deactivated successfully. Sep 13 00:10:54.097193 systemd-networkd[1370]: cali0b443dab7b4: Link UP Sep 13 00:10:54.099209 systemd-networkd[1370]: cali0b443dab7b4: Gained carrier Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:53.951 [INFO][4557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0 goldmane-54d579b49d- calico-system 7e85b06c-dd91-46b3-8259-324d7b213ab5 983 0 2025-09-13 00:10:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c goldmane-54d579b49d-xlg2k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0b443dab7b4 [] [] }} ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Namespace="calico-system" Pod="goldmane-54d579b49d-xlg2k" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:53.952 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Namespace="calico-system" Pod="goldmane-54d579b49d-xlg2k" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.017 [INFO][4580] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" HandleID="k8s-pod-network.333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.018 [INFO][4580] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" HandleID="k8s-pod-network.333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"goldmane-54d579b49d-xlg2k", "timestamp":"2025-09-13 00:10:54.017876959 
+0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.018 [INFO][4580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.018 [INFO][4580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.018 [INFO][4580] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.035 [INFO][4580] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.048 [INFO][4580] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.055 [INFO][4580] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.058 [INFO][4580] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.062 [INFO][4580] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.063 [INFO][4580] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.065 [INFO][4580] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.073 [INFO][4580] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.086 [INFO][4580] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.69/26] block=192.168.61.64/26 handle="k8s-pod-network.333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.086 [INFO][4580] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.69/26] handle="k8s-pod-network.333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.086 [INFO][4580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:10:54.133955 containerd[1464]: 2025-09-13 00:10:54.086 [INFO][4580] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.69/26] IPv6=[] ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" HandleID="k8s-pod-network.333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:54.138517 containerd[1464]: 2025-09-13 00:10:54.092 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Namespace="calico-system" Pod="goldmane-54d579b49d-xlg2k" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7e85b06c-dd91-46b3-8259-324d7b213ab5", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"goldmane-54d579b49d-xlg2k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b443dab7b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:54.138517 containerd[1464]: 2025-09-13 00:10:54.092 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.69/32] ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Namespace="calico-system" Pod="goldmane-54d579b49d-xlg2k" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:54.138517 containerd[1464]: 2025-09-13 00:10:54.092 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b443dab7b4 ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Namespace="calico-system" Pod="goldmane-54d579b49d-xlg2k" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:54.138517 containerd[1464]: 2025-09-13 00:10:54.100 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Namespace="calico-system" Pod="goldmane-54d579b49d-xlg2k" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:54.138517 
containerd[1464]: 2025-09-13 00:10:54.100 [INFO][4557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Namespace="calico-system" Pod="goldmane-54d579b49d-xlg2k" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7e85b06c-dd91-46b3-8259-324d7b213ab5", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de", Pod:"goldmane-54d579b49d-xlg2k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b443dab7b4", MAC:"7a:42:58:e7:ad:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:54.138517 containerd[1464]: 2025-09-13 00:10:54.127 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de" Namespace="calico-system" Pod="goldmane-54d579b49d-xlg2k" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:10:54.226269 containerd[1464]: time="2025-09-13T00:10:54.224197587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:54.226269 containerd[1464]: time="2025-09-13T00:10:54.224720456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:54.226269 containerd[1464]: time="2025-09-13T00:10:54.224786332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:54.232407 containerd[1464]: time="2025-09-13T00:10:54.231936991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:54.239027 systemd-networkd[1370]: cali0cc4af40c8e: Link UP Sep 13 00:10:54.242088 systemd-networkd[1370]: cali0cc4af40c8e: Gained carrier Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:53.909 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0 coredns-668d6bf9bc- kube-system 6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f 980 0 2025-09-13 00:10:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c coredns-668d6bf9bc-6sxxd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0cc4af40c8e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Namespace="kube-system" Pod="coredns-668d6bf9bc-6sxxd" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:53.910 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Namespace="kube-system" Pod="coredns-668d6bf9bc-6sxxd" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.017 [INFO][4568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" HandleID="k8s-pod-network.8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.022 [INFO][4568] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" HandleID="k8s-pod-network.8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333270), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"coredns-668d6bf9bc-6sxxd", "timestamp":"2025-09-13 00:10:54.017885038 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.022 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.086 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.087 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.140 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.156 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.170 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.176 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.181 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.181 [INFO][4568] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.189 [INFO][4568] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706 Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.199 [INFO][4568] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.217 [INFO][4568] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.70/26] block=192.168.61.64/26 handle="k8s-pod-network.8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.217 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.70/26] handle="k8s-pod-network.8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.217 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:10:54.291118 containerd[1464]: 2025-09-13 00:10:54.217 [INFO][4568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.70/26] IPv6=[] ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" HandleID="k8s-pod-network.8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:54.293518 containerd[1464]: 2025-09-13 00:10:54.221 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Namespace="kube-system" Pod="coredns-668d6bf9bc-6sxxd" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"coredns-668d6bf9bc-6sxxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cc4af40c8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:54.293518 containerd[1464]: 2025-09-13 00:10:54.223 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.70/32] ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Namespace="kube-system" Pod="coredns-668d6bf9bc-6sxxd" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:54.293518 containerd[1464]: 2025-09-13 00:10:54.224 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cc4af40c8e ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Namespace="kube-system" Pod="coredns-668d6bf9bc-6sxxd" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:54.293518 containerd[1464]: 2025-09-13 00:10:54.243 
[INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Namespace="kube-system" Pod="coredns-668d6bf9bc-6sxxd" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:54.293518 containerd[1464]: 2025-09-13 00:10:54.245 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Namespace="kube-system" Pod="coredns-668d6bf9bc-6sxxd" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706", Pod:"coredns-668d6bf9bc-6sxxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cc4af40c8e", MAC:"82:23:c2:1f:71:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:54.293518 containerd[1464]: 2025-09-13 00:10:54.279 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706" Namespace="kube-system" Pod="coredns-668d6bf9bc-6sxxd" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:10:54.325140 systemd[1]: Started cri-containerd-333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de.scope - libcontainer container 333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de. 
Sep 13 00:10:54.332510 containerd[1464]: time="2025-09-13T00:10:54.332453668Z" level=info msg="StopPodSandbox for \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\"" Sep 13 00:10:54.410294 containerd[1464]: time="2025-09-13T00:10:54.406197965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:54.410294 containerd[1464]: time="2025-09-13T00:10:54.408952962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:54.410294 containerd[1464]: time="2025-09-13T00:10:54.408976964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:54.410294 containerd[1464]: time="2025-09-13T00:10:54.409120603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:54.462962 systemd[1]: Started cri-containerd-8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706.scope - libcontainer container 8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706. Sep 13 00:10:54.547090 containerd[1464]: time="2025-09-13T00:10:54.547014432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xlg2k,Uid:7e85b06c-dd91-46b3-8259-324d7b213ab5,Namespace:calico-system,Attempt:1,} returns sandbox id \"333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de\"" Sep 13 00:10:54.646829 containerd[1464]: time="2025-09-13T00:10:54.646773297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6sxxd,Uid:6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f,Namespace:kube-system,Attempt:1,} returns sandbox id \"8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706\"" Sep 13 00:10:54.678381 containerd[1464]: time="2025-09-13T00:10:54.676955030Z" level=info msg="CreateContainer within sandbox \"8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:10:54.730737 containerd[1464]: time="2025-09-13T00:10:54.730676021Z" level=info msg="CreateContainer within sandbox \"8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d51a5b0ff20c6b67471f9340db030406ba60430d5a68fbb6a24a7cd8f09b38c\"" Sep 13 00:10:54.733623 containerd[1464]: time="2025-09-13T00:10:54.733576283Z" level=info msg="StartContainer for \"0d51a5b0ff20c6b67471f9340db030406ba60430d5a68fbb6a24a7cd8f09b38c\"" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.634 [INFO][4656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.635 [INFO][4656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" iface="eth0" netns="/var/run/netns/cni-c559bf65-9cf4-763e-1994-5e9d1615b1f7" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.636 [INFO][4656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" iface="eth0" netns="/var/run/netns/cni-c559bf65-9cf4-763e-1994-5e9d1615b1f7" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.638 [INFO][4656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" iface="eth0" netns="/var/run/netns/cni-c559bf65-9cf4-763e-1994-5e9d1615b1f7" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.638 [INFO][4656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.639 [INFO][4656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.709 [INFO][4708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" HandleID="k8s-pod-network.e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.710 [INFO][4708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.710 [INFO][4708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.731 [WARNING][4708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" HandleID="k8s-pod-network.e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.738 [INFO][4708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" HandleID="k8s-pod-network.e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.745 [INFO][4708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:54.757449 containerd[1464]: 2025-09-13 00:10:54.754 [INFO][4656] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:10:54.760887 containerd[1464]: time="2025-09-13T00:10:54.758098501Z" level=info msg="TearDown network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\" successfully" Sep 13 00:10:54.760887 containerd[1464]: time="2025-09-13T00:10:54.758136020Z" level=info msg="StopPodSandbox for \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\" returns successfully" Sep 13 00:10:54.765361 containerd[1464]: time="2025-09-13T00:10:54.763741776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-st7d7,Uid:dbe2014e-59d9-4e26-9bc7-323114f09c1f,Namespace:calico-system,Attempt:1,}" Sep 13 00:10:54.768412 systemd[1]: run-netns-cni\x2dc559bf65\x2d9cf4\x2d763e\x2d1994\x2d5e9d1615b1f7.mount: Deactivated successfully. Sep 13 00:10:54.841107 systemd[1]: Started cri-containerd-0d51a5b0ff20c6b67471f9340db030406ba60430d5a68fbb6a24a7cd8f09b38c.scope - libcontainer container 0d51a5b0ff20c6b67471f9340db030406ba60430d5a68fbb6a24a7cd8f09b38c. Sep 13 00:10:54.945617 containerd[1464]: time="2025-09-13T00:10:54.945550418Z" level=info msg="StartContainer for \"0d51a5b0ff20c6b67471f9340db030406ba60430d5a68fbb6a24a7cd8f09b38c\" returns successfully" Sep 13 00:10:55.002150 systemd-networkd[1370]: calia95e263afa7: Gained IPv6LL Sep 13 00:10:55.067289 systemd-networkd[1370]: cali414600e7f9e: Gained IPv6LL Sep 13 00:10:55.128086 systemd-networkd[1370]: cali8c727f2642c: Link UP Sep 13 00:10:55.128533 systemd-networkd[1370]: cali8c727f2642c: Gained carrier Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:54.903 [INFO][4724] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0 csi-node-driver- calico-system dbe2014e-59d9-4e26-9bc7-323114f09c1f 993 0 2025-09-13 00:10:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c csi-node-driver-st7d7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8c727f2642c [] [] }} ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Namespace="calico-system" Pod="csi-node-driver-st7d7" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:54.903 [INFO][4724] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Namespace="calico-system" Pod="csi-node-driver-st7d7" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:54.980 [INFO][4754] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" HandleID="k8s-pod-network.4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:55.171746 containerd[1464]: 
2025-09-13 00:10:54.981 [INFO][4754] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" HandleID="k8s-pod-network.4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"csi-node-driver-st7d7", "timestamp":"2025-09-13 00:10:54.979969875 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:54.981 [INFO][4754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:54.981 [INFO][4754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:54.981 [INFO][4754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:54.999 [INFO][4754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.014 [INFO][4754] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.061 [INFO][4754] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.072 [INFO][4754] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.078 [INFO][4754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.079 [INFO][4754] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.083 [INFO][4754] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45 Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.090 [INFO][4754] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.106 [INFO][4754] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.71/26] block=192.168.61.64/26 
handle="k8s-pod-network.4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.107 [INFO][4754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.71/26] handle="k8s-pod-network.4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.107 [INFO][4754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:55.171746 containerd[1464]: 2025-09-13 00:10:55.107 [INFO][4754] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.71/26] IPv6=[] ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" HandleID="k8s-pod-network.4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:55.174901 containerd[1464]: 2025-09-13 00:10:55.115 [INFO][4724] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Namespace="calico-system" Pod="csi-node-driver-st7d7" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbe2014e-59d9-4e26-9bc7-323114f09c1f", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"csi-node-driver-st7d7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c727f2642c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:55.174901 containerd[1464]: 2025-09-13 00:10:55.116 [INFO][4724] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.71/32] ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Namespace="calico-system" Pod="csi-node-driver-st7d7" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:55.174901 containerd[1464]: 2025-09-13 00:10:55.116 [INFO][4724] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c727f2642c 
ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Namespace="calico-system" Pod="csi-node-driver-st7d7" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:55.174901 containerd[1464]: 2025-09-13 00:10:55.127 [INFO][4724] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Namespace="calico-system" Pod="csi-node-driver-st7d7" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:55.174901 containerd[1464]: 2025-09-13 00:10:55.128 [INFO][4724] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Namespace="calico-system" Pod="csi-node-driver-st7d7" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbe2014e-59d9-4e26-9bc7-323114f09c1f", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45", Pod:"csi-node-driver-st7d7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c727f2642c", MAC:"b6:72:57:2e:3d:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:55.174901 containerd[1464]: 2025-09-13 00:10:55.159 [INFO][4724] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45" Namespace="calico-system" Pod="csi-node-driver-st7d7" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:10:55.246386 containerd[1464]: time="2025-09-13T00:10:55.245362714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:55.246386 containerd[1464]: time="2025-09-13T00:10:55.245472020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:55.246386 containerd[1464]: time="2025-09-13T00:10:55.245495718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:55.250146 containerd[1464]: time="2025-09-13T00:10:55.248041155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:55.306619 systemd[1]: Started cri-containerd-4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45.scope - libcontainer container 4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45. Sep 13 00:10:55.330820 containerd[1464]: time="2025-09-13T00:10:55.330758503Z" level=info msg="StopPodSandbox for \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\"" Sep 13 00:10:55.483562 containerd[1464]: time="2025-09-13T00:10:55.483488275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-st7d7,Uid:dbe2014e-59d9-4e26-9bc7-323114f09c1f,Namespace:calico-system,Attempt:1,} returns sandbox id \"4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45\"" Sep 13 00:10:55.642918 systemd-networkd[1370]: cali0b443dab7b4: Gained IPv6LL Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.554 [INFO][4823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.557 [INFO][4823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" iface="eth0" netns="/var/run/netns/cni-19a4bd0c-2543-6394-0e66-c97eff4b0a6d" Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.558 [INFO][4823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" iface="eth0" netns="/var/run/netns/cni-19a4bd0c-2543-6394-0e66-c97eff4b0a6d" Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.558 [INFO][4823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" iface="eth0" netns="/var/run/netns/cni-19a4bd0c-2543-6394-0e66-c97eff4b0a6d" Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.558 [INFO][4823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.558 [INFO][4823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.620 [INFO][4839] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" HandleID="k8s-pod-network.a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.621 [INFO][4839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.621 [INFO][4839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.633 [WARNING][4839] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" HandleID="k8s-pod-network.a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.633 [INFO][4839] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" HandleID="k8s-pod-network.a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.636 [INFO][4839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:55.647412 containerd[1464]: 2025-09-13 00:10:55.640 [INFO][4823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:10:55.651211 containerd[1464]: time="2025-09-13T00:10:55.650381241Z" level=info msg="TearDown network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\" successfully" Sep 13 00:10:55.651211 containerd[1464]: time="2025-09-13T00:10:55.650426973Z" level=info msg="StopPodSandbox for \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\" returns successfully" Sep 13 00:10:55.656395 systemd[1]: run-netns-cni\x2d19a4bd0c\x2d2543\x2d6394\x2d0e66\x2dc97eff4b0a6d.mount: Deactivated successfully. Sep 13 00:10:55.659141 containerd[1464]: time="2025-09-13T00:10:55.659095325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b67c64b5-zwfwq,Uid:1b28393a-bc83-414b-b383-70eb716d64ee,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:10:55.817678 kubelet[2572]: I0913 00:10:55.817293 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6sxxd" podStartSLOduration=41.817262804 podStartE2EDuration="41.817262804s" podCreationTimestamp="2025-09-13 00:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:55.812367348 +0000 UTC m=+47.686384311" watchObservedRunningTime="2025-09-13 00:10:55.817262804 +0000 UTC m=+47.691279762" Sep 13 00:10:56.031451 systemd-networkd[1370]: cali5d4006ff1a6: Link UP Sep 13 00:10:56.031877 systemd-networkd[1370]: cali5d4006ff1a6: Gained carrier Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.772 [INFO][4847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0 calico-apiserver-77b67c64b5- calico-apiserver 1b28393a-bc83-414b-b383-70eb716d64ee 1004 0 2025-09-13 00:10:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77b67c64b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c calico-apiserver-77b67c64b5-zwfwq eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali5d4006ff1a6 [] [] }} ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-zwfwq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.772 [INFO][4847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-zwfwq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.903 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" HandleID="k8s-pod-network.3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.904 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" HandleID="k8s-pod-network.3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"calico-apiserver-77b67c64b5-zwfwq", "timestamp":"2025-09-13 00:10:55.903005284 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.904 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.905 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.905 [INFO][4860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.923 [INFO][4860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.937 [INFO][4860] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.952 [INFO][4860] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.957 [INFO][4860] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.963 [INFO][4860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.963 [INFO][4860] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.971 [INFO][4860] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101 Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.982 [INFO][4860] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.999 [INFO][4860] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.72/26] block=192.168.61.64/26 handle="k8s-pod-network.3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.999 [INFO][4860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.72/26] handle="k8s-pod-network.3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.999 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:10:56.083435 containerd[1464]: 2025-09-13 00:10:55.999 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.72/26] IPv6=[] ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" HandleID="k8s-pod-network.3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:56.087782 containerd[1464]: 2025-09-13 00:10:56.011 [INFO][4847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-zwfwq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0", GenerateName:"calico-apiserver-77b67c64b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b28393a-bc83-414b-b383-70eb716d64ee", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b67c64b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"calico-apiserver-77b67c64b5-zwfwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d4006ff1a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:56.087782 containerd[1464]: 2025-09-13 00:10:56.012 [INFO][4847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.72/32] ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-zwfwq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:56.087782 containerd[1464]: 2025-09-13 00:10:56.013 [INFO][4847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d4006ff1a6 ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-zwfwq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:56.087782 containerd[1464]: 2025-09-13 00:10:56.035 [INFO][4847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Namespace="calico-apiserver" 
Pod="calico-apiserver-77b67c64b5-zwfwq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:56.087782 containerd[1464]: 2025-09-13 00:10:56.040 [INFO][4847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-zwfwq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0", GenerateName:"calico-apiserver-77b67c64b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b28393a-bc83-414b-b383-70eb716d64ee", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b67c64b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101", Pod:"calico-apiserver-77b67c64b5-zwfwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d4006ff1a6", MAC:"76:9d:9d:bb:de:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:56.087782 containerd[1464]: 2025-09-13 00:10:56.077 [INFO][4847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-zwfwq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:10:56.172564 containerd[1464]: time="2025-09-13T00:10:56.169910966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:56.172564 containerd[1464]: time="2025-09-13T00:10:56.171899592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:56.172564 containerd[1464]: time="2025-09-13T00:10:56.171926434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:56.172847 containerd[1464]: time="2025-09-13T00:10:56.172449045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:56.218867 systemd-networkd[1370]: cali0cc4af40c8e: Gained IPv6LL Sep 13 00:10:56.260553 systemd[1]: Started cri-containerd-3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101.scope - libcontainer container 3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101. Sep 13 00:10:56.333394 containerd[1464]: time="2025-09-13T00:10:56.333159135Z" level=info msg="StopPodSandbox for \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\"" Sep 13 00:10:56.516793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2048001946.mount: Deactivated successfully. Sep 13 00:10:56.548589 containerd[1464]: time="2025-09-13T00:10:56.548327653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:56.552745 containerd[1464]: time="2025-09-13T00:10:56.552664219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:10:56.555895 containerd[1464]: time="2025-09-13T00:10:56.555836913Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:56.573691 containerd[1464]: time="2025-09-13T00:10:56.573596392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:56.587789 containerd[1464]: time="2025-09-13T00:10:56.586415085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.343305001s" Sep 13 00:10:56.587789 containerd[1464]: time="2025-09-13T00:10:56.586535641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:10:56.593906 containerd[1464]: time="2025-09-13T00:10:56.593836604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:10:56.601846 containerd[1464]: time="2025-09-13T00:10:56.601788372Z" level=info msg="CreateContainer within sandbox \"4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:10:56.650267 containerd[1464]: time="2025-09-13T00:10:56.649551757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b67c64b5-zwfwq,Uid:1b28393a-bc83-414b-b383-70eb716d64ee,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101\"" Sep 13 00:10:56.649609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126086269.mount: Deactivated successfully. 
Sep 13 00:10:56.664444 containerd[1464]: time="2025-09-13T00:10:56.662523340Z" level=info msg="CreateContainer within sandbox \"4559ff3a2b4627f6ae603490f449163968be8cf1fc5d29eb3f21b355b1f86ee2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a57701f154e25642d06592325a09d81033ef1b2a05f9a6e46541c2541f2a71e9\"" Sep 13 00:10:56.666968 containerd[1464]: time="2025-09-13T00:10:56.665985685Z" level=info msg="StartContainer for \"a57701f154e25642d06592325a09d81033ef1b2a05f9a6e46541c2541f2a71e9\"" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.514 [INFO][4921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.514 [INFO][4921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" iface="eth0" netns="/var/run/netns/cni-cb2d2394-14d4-af73-b7af-db313711d217" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.518 [INFO][4921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" iface="eth0" netns="/var/run/netns/cni-cb2d2394-14d4-af73-b7af-db313711d217" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.523 [INFO][4921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" iface="eth0" netns="/var/run/netns/cni-cb2d2394-14d4-af73-b7af-db313711d217" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.524 [INFO][4921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.525 [INFO][4921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.636 [INFO][4933] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" HandleID="k8s-pod-network.700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.643 [INFO][4933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.643 [INFO][4933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.664 [WARNING][4933] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" HandleID="k8s-pod-network.700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.664 [INFO][4933] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" HandleID="k8s-pod-network.700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.671 [INFO][4933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:56.682212 containerd[1464]: 2025-09-13 00:10:56.678 [INFO][4921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:10:56.686276 containerd[1464]: time="2025-09-13T00:10:56.686172666Z" level=info msg="TearDown network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\" successfully" Sep 13 00:10:56.687159 containerd[1464]: time="2025-09-13T00:10:56.686220865Z" level=info msg="StopPodSandbox for \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\" returns successfully" Sep 13 00:10:56.691594 containerd[1464]: time="2025-09-13T00:10:56.690599434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858b45798c-lxwx2,Uid:4bc4ce4d-5c95-4963-89e0-e92aa88cde6e,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:10:56.692775 systemd[1]: run-netns-cni\x2dcb2d2394\x2d14d4\x2daf73\x2db7af\x2ddb313711d217.mount: Deactivated successfully. Sep 13 00:10:56.778537 systemd[1]: Started cri-containerd-a57701f154e25642d06592325a09d81033ef1b2a05f9a6e46541c2541f2a71e9.scope - libcontainer container a57701f154e25642d06592325a09d81033ef1b2a05f9a6e46541c2541f2a71e9. 
Sep 13 00:10:56.922593 systemd-networkd[1370]: cali8c727f2642c: Gained IPv6LL Sep 13 00:10:57.086381 systemd-networkd[1370]: calid877255ac7b: Link UP Sep 13 00:10:57.088769 systemd-networkd[1370]: calid877255ac7b: Gained carrier Sep 13 00:10:57.112609 containerd[1464]: time="2025-09-13T00:10:57.112551256Z" level=info msg="StartContainer for \"a57701f154e25642d06592325a09d81033ef1b2a05f9a6e46541c2541f2a71e9\" returns successfully" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:56.919 [INFO][4963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0 calico-apiserver-858b45798c- calico-apiserver 4bc4ce4d-5c95-4963-89e0-e92aa88cde6e 1020 0 2025-09-13 00:10:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:858b45798c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c calico-apiserver-858b45798c-lxwx2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid877255ac7b [] [] }} ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-lxwx2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:56.919 [INFO][4963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-lxwx2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:56.985 [INFO][4990] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:56.986 [INFO][4990] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fa20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"calico-apiserver-858b45798c-lxwx2", "timestamp":"2025-09-13 00:10:56.98576138 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:56.986 [INFO][4990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:56.986 [INFO][4990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:56.986 [INFO][4990] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.001 [INFO][4990] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.008 [INFO][4990] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.018 [INFO][4990] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.022 [INFO][4990] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.029 [INFO][4990] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.029 [INFO][4990] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.032 [INFO][4990] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735 Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.046 [INFO][4990] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.062 [INFO][4990] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.73/26] block=192.168.61.64/26 handle="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.062 [INFO][4990] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.73/26] handle="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.062 [INFO][4990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:10:57.127857 containerd[1464]: 2025-09-13 00:10:57.062 [INFO][4990] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.73/26] IPv6=[] ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:57.129217 containerd[1464]: 2025-09-13 00:10:57.069 [INFO][4963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-lxwx2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0", GenerateName:"calico-apiserver-858b45798c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858b45798c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"calico-apiserver-858b45798c-lxwx2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid877255ac7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:57.129217 containerd[1464]: 2025-09-13 00:10:57.070 [INFO][4963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.73/32] ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-lxwx2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:57.129217 containerd[1464]: 2025-09-13 00:10:57.070 [INFO][4963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid877255ac7b ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-lxwx2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:57.129217 containerd[1464]: 2025-09-13 00:10:57.089 [INFO][4963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Namespace="calico-apiserver" 
Pod="calico-apiserver-858b45798c-lxwx2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:57.129217 containerd[1464]: 2025-09-13 00:10:57.092 [INFO][4963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-lxwx2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0", GenerateName:"calico-apiserver-858b45798c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858b45798c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735", Pod:"calico-apiserver-858b45798c-lxwx2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid877255ac7b", MAC:"5e:bd:b3:08:00:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:57.129217 containerd[1464]: 2025-09-13 00:10:57.124 [INFO][4963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Namespace="calico-apiserver" Pod="calico-apiserver-858b45798c-lxwx2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:10:57.163003 containerd[1464]: time="2025-09-13T00:10:57.162420801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:57.163003 containerd[1464]: time="2025-09-13T00:10:57.162500951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:57.163003 containerd[1464]: time="2025-09-13T00:10:57.162539324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:57.163003 containerd[1464]: time="2025-09-13T00:10:57.162708421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:57.204380 systemd[1]: Started cri-containerd-f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735.scope - libcontainer container f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735. Sep 13 00:10:57.307777 containerd[1464]: time="2025-09-13T00:10:57.307715068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858b45798c-lxwx2,Uid:4bc4ce4d-5c95-4963-89e0-e92aa88cde6e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\"" Sep 13 00:10:57.818849 systemd-networkd[1370]: cali5d4006ff1a6: Gained IPv6LL Sep 13 00:10:57.850370 kubelet[2572]: I0913 00:10:57.848884 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78f74f95fd-p4ctr" podStartSLOduration=2.8881055939999998 podStartE2EDuration="8.8488543s" podCreationTimestamp="2025-09-13 00:10:49 +0000 UTC" firstStartedPulling="2025-09-13 00:10:50.629301866 +0000 UTC m=+42.503318822" lastFinishedPulling="2025-09-13 00:10:56.590050576 +0000 UTC m=+48.464067528" observedRunningTime="2025-09-13 00:10:57.844887734 +0000 UTC m=+49.718904696" watchObservedRunningTime="2025-09-13 00:10:57.8488543 +0000 UTC m=+49.722871263" Sep 13 00:10:59.036367 systemd-networkd[1370]: calid877255ac7b: Gained IPv6LL Sep 13 00:10:59.735304 containerd[1464]: time="2025-09-13T00:10:59.735206792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:59.737513 containerd[1464]: time="2025-09-13T00:10:59.736960067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:10:59.738808 containerd[1464]: time="2025-09-13T00:10:59.738744136Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:59.744543 containerd[1464]: time="2025-09-13T00:10:59.744273084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:59.746401 containerd[1464]: time="2025-09-13T00:10:59.745382124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.151481669s" Sep 13 00:10:59.746401 containerd[1464]: time="2025-09-13T00:10:59.745431881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:10:59.748906 containerd[1464]: time="2025-09-13T00:10:59.748709335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:10:59.770300 containerd[1464]: time="2025-09-13T00:10:59.767793921Z" level=info msg="CreateContainer within sandbox \"5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:10:59.801586 
containerd[1464]: time="2025-09-13T00:10:59.801421510Z" level=info msg="CreateContainer within sandbox \"5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"987e38f737e0a75d3d6ac04b26b22a4a6dfccec40847c76aa458ba37c0ae9879\"" Sep 13 00:10:59.803306 containerd[1464]: time="2025-09-13T00:10:59.802629504Z" level=info msg="StartContainer for \"987e38f737e0a75d3d6ac04b26b22a4a6dfccec40847c76aa458ba37c0ae9879\"" Sep 13 00:10:59.896518 systemd[1]: Started cri-containerd-987e38f737e0a75d3d6ac04b26b22a4a6dfccec40847c76aa458ba37c0ae9879.scope - libcontainer container 987e38f737e0a75d3d6ac04b26b22a4a6dfccec40847c76aa458ba37c0ae9879. Sep 13 00:10:59.963083 containerd[1464]: time="2025-09-13T00:10:59.962892769Z" level=info msg="StartContainer for \"987e38f737e0a75d3d6ac04b26b22a4a6dfccec40847c76aa458ba37c0ae9879\" returns successfully" Sep 13 00:11:00.878492 kubelet[2572]: I0913 00:11:00.878187 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b44c8449-j7495" podStartSLOduration=25.740313198 podStartE2EDuration="31.878158908s" podCreationTimestamp="2025-09-13 00:10:29 +0000 UTC" firstStartedPulling="2025-09-13 00:10:53.610207986 +0000 UTC m=+45.484224943" lastFinishedPulling="2025-09-13 00:10:59.748053699 +0000 UTC m=+51.622070653" observedRunningTime="2025-09-13 00:11:00.872333863 +0000 UTC m=+52.746350821" watchObservedRunningTime="2025-09-13 00:11:00.878158908 +0000 UTC m=+52.752175870" Sep 13 00:11:01.935873 ntpd[1427]: Listen normally on 8 vxlan.calico 192.168.61.64:123 Sep 13 00:11:01.936534 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 8 vxlan.calico 192.168.61.64:123 Sep 13 00:11:01.936629 ntpd[1427]: Listen normally on 9 calia030d6f2f61 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 13 00:11:01.936827 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 9 calia030d6f2f61 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 13 00:11:01.936992 ntpd[1427]: Listen normally on 10 vxlan.calico [fe80::6424:beff:fe10:b66d%5]:123 Sep 13 00:11:01.937469 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 10 vxlan.calico [fe80::6424:beff:fe10:b66d%5]:123 Sep 13 00:11:01.937469 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 11 cali5543cbc496a [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:11:01.937469 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 12 calia95e263afa7 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:11:01.937469 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 13 cali414600e7f9e [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:11:01.937071 ntpd[1427]: Listen normally on 11 cali5543cbc496a [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:11:01.937792 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 14 cali0b443dab7b4 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:11:01.937792 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 15 cali0cc4af40c8e [fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:11:01.937129 ntpd[1427]: Listen normally on 12 calia95e263afa7 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:11:01.937965 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 16 cali8c727f2642c [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:11:01.937965 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 17 cali5d4006ff1a6 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:11:01.937965 ntpd[1427]: 13 Sep 00:11:01 ntpd[1427]: Listen normally on 18 calid877255ac7b [fe80::ecee:eeff:feee:eeee%15]:123 Sep 13 
00:11:01.937186 ntpd[1427]: Listen normally on 13 cali414600e7f9e [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:11:01.937667 ntpd[1427]: Listen normally on 14 cali0b443dab7b4 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:11:01.937747 ntpd[1427]: Listen normally on 15 cali0cc4af40c8e [fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:11:01.937819 ntpd[1427]: Listen normally on 16 cali8c727f2642c [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:11:01.937888 ntpd[1427]: Listen normally on 17 cali5d4006ff1a6 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:11:01.937947 ntpd[1427]: Listen normally on 18 calid877255ac7b [fe80::ecee:eeff:feee:eeee%15]:123 Sep 13 00:11:03.269898 containerd[1464]: time="2025-09-13T00:11:03.269800394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:03.272086 containerd[1464]: time="2025-09-13T00:11:03.271746178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:11:03.275281 containerd[1464]: time="2025-09-13T00:11:03.274103626Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:03.278763 containerd[1464]: time="2025-09-13T00:11:03.278692443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:03.281285 containerd[1464]: time="2025-09-13T00:11:03.281198786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.532420236s" Sep 13 00:11:03.281636 containerd[1464]: time="2025-09-13T00:11:03.281574611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:11:03.285520 containerd[1464]: time="2025-09-13T00:11:03.285474322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:11:03.287340 containerd[1464]: time="2025-09-13T00:11:03.287289009Z" level=info msg="CreateContainer within sandbox \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:03.311053 containerd[1464]: time="2025-09-13T00:11:03.309776656Z" level=info msg="CreateContainer within sandbox \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb\"" Sep 13 00:11:03.313642 containerd[1464]: time="2025-09-13T00:11:03.313260114Z" level=info msg="StartContainer for \"91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb\"" Sep 13 00:11:03.385489 systemd[1]: run-containerd-runc-k8s.io-91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb-runc.V8O32I.mount: Deactivated successfully. 
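Every cali* interface in the ntpd lines above carries the same link-local address, fe80::ecee:eeff:feee:eeee. That is exactly what modified EUI-64 derivation produces from Calico's convention of giving every host-side veth the fixed MAC ee:ee:ee:ee:ee:ee (the convention is an assumption here; the log never prints that MAC), and reversing the derivation for the logged vxlan.calico address implies a MAC of 66:24:be:10:b6:6d. A small self-contained sketch:

```go
// Illustrative sketch: derive the IPv6 link-local address (modified EUI-64)
// the kernel assigns from an interface MAC, and check it against the two
// addresses ntpd logs above.
package main

import (
	"fmt"
	"net/netip"
)

// linkLocalFromMAC builds fe80:: plus the modified EUI-64 interface identifier.
func linkLocalFromMAC(mac [6]byte) netip.Addr {
	var a [16]byte
	a[0], a[1] = 0xfe, 0x80 // fe80::/10 link-local prefix
	a[8] = mac[0] ^ 0x02    // flip the universal/local bit
	a[9], a[10] = mac[1], mac[2]
	a[11], a[12] = 0xff, 0xfe // ff:fe inserted between the MAC halves
	a[13], a[14], a[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(a)
}

func main() {
	// Calico's fixed host-side veth MAC (assumed, see note above).
	fmt.Println(linkLocalFromMAC([6]byte{0xee, 0xee, 0xee, 0xee, 0xee, 0xee})) // fe80::ecee:eeff:feee:eeee
	// MAC inferred by reversing the derivation from the logged vxlan address.
	fmt.Println(linkLocalFromMAC([6]byte{0x66, 0x24, 0xbe, 0x10, 0xb6, 0x6d})) // fe80::6424:beff:fe10:b66d
}
```

The %4, %8 through %15 suffixes in the ntpd lines are interface scope IDs, which is why the identical link-local address can recur on every cali* device.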
Sep 13 00:11:03.395576 systemd[1]: Started cri-containerd-91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb.scope - libcontainer container 91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb. Sep 13 00:11:03.459532 containerd[1464]: time="2025-09-13T00:11:03.459470001Z" level=info msg="StartContainer for \"91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb\" returns successfully" Sep 13 00:11:03.878887 kubelet[2572]: I0913 00:11:03.878778 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-858b45798c-wf79q" podStartSLOduration=30.251490769 podStartE2EDuration="39.878626637s" podCreationTimestamp="2025-09-13 00:10:24 +0000 UTC" firstStartedPulling="2025-09-13 00:10:53.655727548 +0000 UTC m=+45.529744500" lastFinishedPulling="2025-09-13 00:11:03.28286341 +0000 UTC m=+55.156880368" observedRunningTime="2025-09-13 00:11:03.877627556 +0000 UTC m=+55.751644519" watchObservedRunningTime="2025-09-13 00:11:03.878626637 +0000 UTC m=+55.752643600" Sep 13 00:11:05.867253 kubelet[2572]: I0913 00:11:05.867013 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:06.215795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880090175.mount: Deactivated successfully. Sep 13 00:11:07.386245 containerd[1464]: time="2025-09-13T00:11:07.384770025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:07.388219 containerd[1464]: time="2025-09-13T00:11:07.388152930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:11:07.390066 containerd[1464]: time="2025-09-13T00:11:07.390028838Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:07.395333 containerd[1464]: time="2025-09-13T00:11:07.395267693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:07.399646 containerd[1464]: time="2025-09-13T00:11:07.399561495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.113816097s" Sep 13 00:11:07.400268 containerd[1464]: time="2025-09-13T00:11:07.400024655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:11:07.404368 containerd[1464]: time="2025-09-13T00:11:07.404279032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:11:07.408964 containerd[1464]: time="2025-09-13T00:11:07.408861349Z" level=info msg="CreateContainer within sandbox \"333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:11:07.444256 containerd[1464]: time="2025-09-13T00:11:07.442812761Z" level=info msg="CreateContainer within sandbox 
\"333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dc0528f05d82d74e6a0d3d454e580abd7ef90454659ff26ffadf01922a9f008d\"" Sep 13 00:11:07.447182 containerd[1464]: time="2025-09-13T00:11:07.446769340Z" level=info msg="StartContainer for \"dc0528f05d82d74e6a0d3d454e580abd7ef90454659ff26ffadf01922a9f008d\"" Sep 13 00:11:07.515523 systemd[1]: Started cri-containerd-dc0528f05d82d74e6a0d3d454e580abd7ef90454659ff26ffadf01922a9f008d.scope - libcontainer container dc0528f05d82d74e6a0d3d454e580abd7ef90454659ff26ffadf01922a9f008d. Sep 13 00:11:07.579588 containerd[1464]: time="2025-09-13T00:11:07.579534894Z" level=info msg="StartContainer for \"dc0528f05d82d74e6a0d3d454e580abd7ef90454659ff26ffadf01922a9f008d\" returns successfully" Sep 13 00:11:07.897574 kubelet[2572]: I0913 00:11:07.896792 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-xlg2k" podStartSLOduration=27.044988254 podStartE2EDuration="39.896760738s" podCreationTimestamp="2025-09-13 00:10:28 +0000 UTC" firstStartedPulling="2025-09-13 00:10:54.55115707 +0000 UTC m=+46.425174013" lastFinishedPulling="2025-09-13 00:11:07.40292954 +0000 UTC m=+59.276946497" observedRunningTime="2025-09-13 00:11:07.893710524 +0000 UTC m=+59.767727486" watchObservedRunningTime="2025-09-13 00:11:07.896760738 +0000 UTC m=+59.770777712" Sep 13 00:11:08.293755 containerd[1464]: time="2025-09-13T00:11:08.293691087Z" level=info msg="StopPodSandbox for \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\"" Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.379 [WARNING][5294] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706", Pod:"coredns-668d6bf9bc-6sxxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cc4af40c8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.380 [INFO][5294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.380 [INFO][5294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" iface="eth0" netns="" Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.380 [INFO][5294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.380 [INFO][5294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.450 [INFO][5303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" HandleID="k8s-pod-network.175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.451 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.451 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.461 [WARNING][5303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" HandleID="k8s-pod-network.175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.461 [INFO][5303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" HandleID="k8s-pod-network.175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.465 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:08.474711 containerd[1464]: 2025-09-13 00:11:08.469 [INFO][5294] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:11:08.474711 containerd[1464]: time="2025-09-13T00:11:08.474705156Z" level=info msg="TearDown network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\" successfully" Sep 13 00:11:08.475826 containerd[1464]: time="2025-09-13T00:11:08.474742159Z" level=info msg="StopPodSandbox for \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\" returns successfully" Sep 13 00:11:08.476969 containerd[1464]: time="2025-09-13T00:11:08.476638910Z" level=info msg="RemovePodSandbox for \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\"" Sep 13 00:11:08.476969 containerd[1464]: time="2025-09-13T00:11:08.476690646Z" level=info msg="Forcibly stopping sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\"" Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.579 [WARNING][5318] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6cd4a1a8-fc8a-4205-a62b-c1d1dd0abb8f", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"8f2a0a84ef1cf4a798b3e32978d514c6aac81f459888930fc6314bdf6a54e706", Pod:"coredns-668d6bf9bc-6sxxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cc4af40c8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.580 [INFO][5318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.580 [INFO][5318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" iface="eth0" netns="" Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.580 [INFO][5318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.580 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.628 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" HandleID="k8s-pod-network.175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.630 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.630 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.644 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" HandleID="k8s-pod-network.175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.644 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" HandleID="k8s-pod-network.175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--6sxxd-eth0" Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.647 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:08.652057 containerd[1464]: 2025-09-13 00:11:08.649 [INFO][5318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505" Sep 13 00:11:08.652057 containerd[1464]: time="2025-09-13T00:11:08.651668092Z" level=info msg="TearDown network for sandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\" successfully" Sep 13 00:11:08.661264 containerd[1464]: time="2025-09-13T00:11:08.660947482Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
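The ipam_plugin.go lines above bracket every release with the host-wide IPAM lock (About to acquire / Acquired / Released), and a release for an address that was already freed only logs "Asked to release address but it doesn't exist. Ignoring". A minimal sketch of that idempotent release, with invented names (ipamStore, releaseByHandle) standing in for Calico's real types:

```go
// Hedged reconstruction of the logged release sequence, not Calico's code:
// serialize on a host-wide lock, release by handle, and treat "already gone"
// as a no-op so a repeated CNI DEL for the same sandbox cannot fail.
package main

import (
	"fmt"
	"sync"
)

type ipamStore struct {
	mu          sync.Mutex        // stands in for the host-wide IPAM lock
	allocations map[string]string // handleID -> allocated address
}

func (s *ipamStore) releaseByHandle(handleID string) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if _, ok := s.allocations[handleID]; !ok {
		fmt.Println("Asked to release address but it doesn't exist. Ignoring")
		return
	}
	delete(s.allocations, handleID)
}

func main() {
	s := &ipamStore{allocations: map[string]string{}}
	// Second DEL for a sandbox whose address was already released (ID truncated).
	s.releaseByHandle("k8s-pod-network.175e2219d8b6")
}
```

That idempotency is what lets the StopPodSandbox and "Forcibly stopping sandbox" passes above run back to back without an error.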
Sep 13 00:11:08.661264 containerd[1464]: time="2025-09-13T00:11:08.661060956Z" level=info msg="RemovePodSandbox \"175e2219d8b652ea144a503f6e2b1af62ed99f6e98d8e76188953f8af1ea7505\" returns successfully" Sep 13 00:11:08.661982 containerd[1464]: time="2025-09-13T00:11:08.661919881Z" level=info msg="StopPodSandbox for \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\"" Sep 13 00:11:08.667643 containerd[1464]: time="2025-09-13T00:11:08.667576246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:11:08.668795 containerd[1464]: time="2025-09-13T00:11:08.668754329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:08.672787 containerd[1464]: time="2025-09-13T00:11:08.672721495Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:08.673309 containerd[1464]: time="2025-09-13T00:11:08.673177604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.268852974s" Sep 13 00:11:08.673431 containerd[1464]: time="2025-09-13T00:11:08.673321880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:11:08.673850 containerd[1464]: time="2025-09-13T00:11:08.673794823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:08.682455 containerd[1464]: time="2025-09-13T00:11:08.679787867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:11:08.682694 containerd[1464]: time="2025-09-13T00:11:08.682651251Z" level=info msg="CreateContainer within sandbox \"4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:11:08.728754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1955629244.mount: Deactivated successfully. Sep 13 00:11:08.732323 containerd[1464]: time="2025-09-13T00:11:08.732272466Z" level=info msg="CreateContainer within sandbox \"4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ad991428c6407941773f7cca034d1f430d1741122d47887a390ac10ac7fdcc7b\"" Sep 13 00:11:08.734362 containerd[1464]: time="2025-09-13T00:11:08.734296218Z" level=info msg="StartContainer for \"ad991428c6407941773f7cca034d1f430d1741122d47887a390ac10ac7fdcc7b\"" Sep 13 00:11:08.834829 systemd[1]: run-containerd-runc-k8s.io-ad991428c6407941773f7cca034d1f430d1741122d47887a390ac10ac7fdcc7b-runc.dOvE6C.mount: Deactivated successfully. Sep 13 00:11:08.847273 systemd[1]: Started cri-containerd-ad991428c6407941773f7cca034d1f430d1741122d47887a390ac10ac7fdcc7b.scope - libcontainer container ad991428c6407941773f7cca034d1f430d1741122d47887a390ac10ac7fdcc7b. 
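The csi pull above is bracketed by its own entries: the PullImage request at time="2025-09-13T00:11:07.404279032Z" and the "Pulled ... in 1.268852974s" line at time="2025-09-13T00:11:08.673177604Z". Subtracting the two logged timestamps reproduces the figure to within roughly 45µs; this is a reconstruction of the arithmetic, not containerd's own accounting, which times the pull internally:

```go
// Recompute the logged csi pull duration from the two containerd timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:11:07.404279032Z") // PullImage logged
	done, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:11:08.673177604Z")  // "Pulled image" logged

	fmt.Println(done.Sub(start)) // 1.268898572s, vs the logged "in 1.268852974s"
}
```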
Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.766 [WARNING][5340] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.766 [INFO][5340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.766 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" iface="eth0" netns="" Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.767 [INFO][5340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.767 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.856 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" HandleID="k8s-pod-network.c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.858 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.858 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.874 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" HandleID="k8s-pod-network.c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.874 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" HandleID="k8s-pod-network.c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.877 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:08.888983 containerd[1464]: 2025-09-13 00:11:08.882 [INFO][5340] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:11:08.888983 containerd[1464]: time="2025-09-13T00:11:08.888703281Z" level=info msg="TearDown network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\" successfully" Sep 13 00:11:08.888983 containerd[1464]: time="2025-09-13T00:11:08.888799081Z" level=info msg="StopPodSandbox for \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\" returns successfully" Sep 13 00:11:08.891917 containerd[1464]: time="2025-09-13T00:11:08.891522131Z" level=info msg="RemovePodSandbox for \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\"" Sep 13 00:11:08.891917 containerd[1464]: time="2025-09-13T00:11:08.891564801Z" level=info msg="Forcibly stopping sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\"" Sep 13 00:11:08.915759 containerd[1464]: time="2025-09-13T00:11:08.915317896Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:08.920883 containerd[1464]: time="2025-09-13T00:11:08.920546813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:11:08.940040 containerd[1464]: time="2025-09-13T00:11:08.939844878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 259.890154ms" Sep 13 00:11:08.940040 containerd[1464]: time="2025-09-13T00:11:08.939905634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:11:08.950776 containerd[1464]: time="2025-09-13T00:11:08.950711328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:11:08.962430 containerd[1464]: time="2025-09-13T00:11:08.961719828Z" level=info msg="CreateContainer within sandbox \"3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:08.983944 containerd[1464]: time="2025-09-13T00:11:08.983889153Z" level=info msg="StartContainer for \"ad991428c6407941773f7cca034d1f430d1741122d47887a390ac10ac7fdcc7b\" returns successfully" Sep 13 00:11:09.040272 containerd[1464]: time="2025-09-13T00:11:09.039977466Z" level=info msg="CreateContainer within sandbox \"3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"07c728c8b2c22ced1718183459f41af32bd4106328b618d18587aea03cb76825\"" Sep 13 00:11:09.042269 containerd[1464]: time="2025-09-13T00:11:09.041477391Z" level=info msg="StartContainer for \"07c728c8b2c22ced1718183459f41af32bd4106328b618d18587aea03cb76825\"" Sep 13 00:11:09.144520 systemd[1]: Started cri-containerd-07c728c8b2c22ced1718183459f41af32bd4106328b618d18587aea03cb76825.scope - libcontainer container 07c728c8b2c22ced1718183459f41af32bd4106328b618d18587aea03cb76825. 
Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.068 [WARNING][5387] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.068 [INFO][5387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.068 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" iface="eth0" netns="" Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.068 [INFO][5387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.068 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.133 [INFO][5428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" HandleID="k8s-pod-network.c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.134 [INFO][5428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.134 [INFO][5428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.150 [WARNING][5428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" HandleID="k8s-pod-network.c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.150 [INFO][5428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" HandleID="k8s-pod-network.c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-whisker--5df6d656dc--wgt8k-eth0" Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.153 [INFO][5428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:09.159045 containerd[1464]: 2025-09-13 00:11:09.156 [INFO][5387] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac" Sep 13 00:11:09.159863 containerd[1464]: time="2025-09-13T00:11:09.159113275Z" level=info msg="TearDown network for sandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\" successfully" Sep 13 00:11:09.167667 containerd[1464]: time="2025-09-13T00:11:09.167470139Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:09.167667 containerd[1464]: time="2025-09-13T00:11:09.167590216Z" level=info msg="RemovePodSandbox \"c0d6b3aca1d6bb44338cbf45cd23a77400e20c41069f2f0b81ad1b31241249ac\" returns successfully" Sep 13 00:11:09.170757 containerd[1464]: time="2025-09-13T00:11:09.170704592Z" level=info msg="StopPodSandbox for \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\"" Sep 13 00:11:09.201838 containerd[1464]: time="2025-09-13T00:11:09.201003343Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:09.203847 containerd[1464]: time="2025-09-13T00:11:09.203778329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:11:09.207942 containerd[1464]: time="2025-09-13T00:11:09.207835202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 250.769961ms" Sep 13 00:11:09.207942 containerd[1464]: time="2025-09-13T00:11:09.207914874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:11:09.210607 containerd[1464]: time="2025-09-13T00:11:09.210451662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:11:09.212649 containerd[1464]: time="2025-09-13T00:11:09.212596601Z" level=info msg="CreateContainer within sandbox \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:09.271205 containerd[1464]: time="2025-09-13T00:11:09.270982570Z" level=info msg="CreateContainer within sandbox \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\"" Sep 13 00:11:09.277171 containerd[1464]: time="2025-09-13T00:11:09.274477480Z" level=info msg="StartContainer for \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\"" Sep 13 00:11:09.349117 systemd[1]: Started cri-containerd-d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2.scope - libcontainer container d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2. 
Sep 13 00:11:09.383742 containerd[1464]: time="2025-09-13T00:11:09.383685943Z" level=info msg="StartContainer for \"07c728c8b2c22ced1718183459f41af32bd4106328b618d18587aea03cb76825\" returns successfully" Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.308 [WARNING][5466] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0", GenerateName:"calico-kube-controllers-5b44c8449-", Namespace:"calico-system", SelfLink:"", UID:"cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b44c8449", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf", Pod:"calico-kube-controllers-5b44c8449-j7495", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia95e263afa7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.308 [INFO][5466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.308 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" iface="eth0" netns="" Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.308 [INFO][5466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.308 [INFO][5466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.397 [INFO][5483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" HandleID="k8s-pod-network.05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.398 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.399 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.419 [WARNING][5483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" HandleID="k8s-pod-network.05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.419 [INFO][5483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" HandleID="k8s-pod-network.05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.425 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:09.432811 containerd[1464]: 2025-09-13 00:11:09.429 [INFO][5466] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:11:09.432811 containerd[1464]: time="2025-09-13T00:11:09.432740199Z" level=info msg="TearDown network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\" successfully" Sep 13 00:11:09.432811 containerd[1464]: time="2025-09-13T00:11:09.432780195Z" level=info msg="StopPodSandbox for \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\" returns successfully" Sep 13 00:11:09.435443 containerd[1464]: time="2025-09-13T00:11:09.435398557Z" level=info msg="RemovePodSandbox for \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\"" Sep 13 00:11:09.435589 containerd[1464]: time="2025-09-13T00:11:09.435449888Z" level=info msg="Forcibly stopping sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\"" Sep 13 00:11:09.546090 containerd[1464]: time="2025-09-13T00:11:09.546029663Z" level=info msg="StartContainer for \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\" returns successfully" Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.518 [WARNING][5524] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0", GenerateName:"calico-kube-controllers-5b44c8449-", Namespace:"calico-system", SelfLink:"", UID:"cdf14ae2-e5cc-4d6f-8ee1-298a0fa2d089", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b44c8449", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"5790d78e7fe050de934047167342388959e776bcea943fcc04bca842e12a18bf", Pod:"calico-kube-controllers-5b44c8449-j7495", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia95e263afa7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.518 [INFO][5524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.518 [INFO][5524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" iface="eth0" netns="" Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.518 [INFO][5524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.518 [INFO][5524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.588 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" HandleID="k8s-pod-network.05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.591 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.591 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.605 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" HandleID="k8s-pod-network.05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.605 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" HandleID="k8s-pod-network.05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--kube--controllers--5b44c8449--j7495-eth0" Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.609 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:09.616437 containerd[1464]: 2025-09-13 00:11:09.611 [INFO][5524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd" Sep 13 00:11:09.616437 containerd[1464]: time="2025-09-13T00:11:09.614343041Z" level=info msg="TearDown network for sandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\" successfully" Sep 13 00:11:09.625135 containerd[1464]: time="2025-09-13T00:11:09.624703154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:11:09.625441 containerd[1464]: time="2025-09-13T00:11:09.625409418Z" level=info msg="RemovePodSandbox \"05d5dc8f66970a0f367335009805182bdd502bd26f71c2ce355927c479359edd\" returns successfully" Sep 13 00:11:09.628132 containerd[1464]: time="2025-09-13T00:11:09.628054074Z" level=info msg="StopPodSandbox for \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\"" Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.742 [WARNING][5560] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"18c29eae-9c67-41af-9566-6e188885af9e", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f", Pod:"coredns-668d6bf9bc-7lkdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5543cbc496a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.744 [INFO][5560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.744 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" iface="eth0" netns="" Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.744 [INFO][5560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.744 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.802 [INFO][5572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" HandleID="k8s-pod-network.333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.803 [INFO][5572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.803 [INFO][5572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.824 [WARNING][5572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" HandleID="k8s-pod-network.333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.824 [INFO][5572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" HandleID="k8s-pod-network.333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.826 [INFO][5572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:09.835384 containerd[1464]: 2025-09-13 00:11:09.831 [INFO][5560] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:11:09.836466 containerd[1464]: time="2025-09-13T00:11:09.835438612Z" level=info msg="TearDown network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\" successfully" Sep 13 00:11:09.836466 containerd[1464]: time="2025-09-13T00:11:09.835472983Z" level=info msg="StopPodSandbox for \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\" returns successfully" Sep 13 00:11:09.836466 containerd[1464]: time="2025-09-13T00:11:09.836191852Z" level=info msg="RemovePodSandbox for \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\"" Sep 13 00:11:09.836466 containerd[1464]: time="2025-09-13T00:11:09.836271124Z" level=info msg="Forcibly stopping sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\"" Sep 13 00:11:09.968382 kubelet[2572]: I0913 00:11:09.968028 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77b67c64b5-zwfwq" podStartSLOduration=32.680884011 podStartE2EDuration="44.967998711s" podCreationTimestamp="2025-09-13 00:10:25 +0000 UTC" firstStartedPulling="2025-09-13 00:10:56.655982794 +0000 UTC m=+48.529999745" lastFinishedPulling="2025-09-13 00:11:08.943097506 +0000 UTC m=+60.817114445" observedRunningTime="2025-09-13 00:11:09.935165925 +0000 UTC m=+61.809182890" watchObservedRunningTime="2025-09-13 00:11:09.967998711 +0000 UTC m=+61.842015672" Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:09.968 [WARNING][5586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"18c29eae-9c67-41af-9566-6e188885af9e", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"3c8263a96f3630e5bd11a79b762fcc7b037d4283d2463afc0b9e5ba0cbe97f9f", Pod:"coredns-668d6bf9bc-7lkdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5543cbc496a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:09.969 [INFO][5586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:09.969 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" iface="eth0" netns="" Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:09.969 [INFO][5586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:09.969 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:10.037 [INFO][5594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" HandleID="k8s-pod-network.333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:10.038 [INFO][5594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:10.038 [INFO][5594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:10.053 [WARNING][5594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" HandleID="k8s-pod-network.333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:10.053 [INFO][5594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" HandleID="k8s-pod-network.333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-coredns--668d6bf9bc--7lkdb-eth0" Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:10.056 [INFO][5594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:10.062266 containerd[1464]: 2025-09-13 00:11:10.058 [INFO][5586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c" Sep 13 00:11:10.065011 containerd[1464]: time="2025-09-13T00:11:10.063254592Z" level=info msg="TearDown network for sandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\" successfully" Sep 13 00:11:10.070584 containerd[1464]: time="2025-09-13T00:11:10.070509786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
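The kubelet pod_startup_latency_tracker record a few lines above is internally consistent: podStartSLOduration is the end-to-end startup time minus the time spent pulling images. Using the monotonic m=+ offsets, the pull took 60.817114445 - 48.529999745 = 12.287114700 s, and 44.967998711 s - 12.287114700 s = 32.680884011 s, exactly the reported SLO duration. A quick check in Go:

```go
package main

import "fmt"

func main() {
	// Monotonic-clock offsets (the "m=+..." values) and durations from the
	// kubelet record for calico-apiserver-77b67c64b5-zwfwq.
	const (
		firstStartedPulling = 48.529999745 // seconds, m=+ offset
		lastFinishedPulling = 60.817114445
		e2eDuration         = 44.967998711 // podStartE2EDuration, seconds
	)
	pull := lastFinishedPulling - firstStartedPulling
	slo := e2eDuration - pull
	fmt.Printf("image pull %.9fs, derived SLO duration %.9fs\n", pull, slo)
	// Prints 12.287114700s and 32.680884011s (up to float rounding),
	// matching the podStartSLOduration the kubelet reported.
}
```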
Sep 13 00:11:10.070812 containerd[1464]: time="2025-09-13T00:11:10.070783236Z" level=info msg="RemovePodSandbox \"333e1db0f6cdb66ff4f679bf5ca74d09ed0a1b8f949b0619dc0b5fa5de511c3c\" returns successfully" Sep 13 00:11:10.072533 containerd[1464]: time="2025-09-13T00:11:10.072496642Z" level=info msg="StopPodSandbox for \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\"" Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.144 [WARNING][5610] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0", GenerateName:"calico-apiserver-858b45798c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a122eeee-1c48-4d4d-8b11-6b176f51757c", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858b45798c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b", Pod:"calico-apiserver-858b45798c-wf79q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali414600e7f9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.146 [INFO][5610] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.146 [INFO][5610] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" iface="eth0" netns="" Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.146 [INFO][5610] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.146 [INFO][5610] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.210 [INFO][5618] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" HandleID="k8s-pod-network.0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.210 [INFO][5618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.211 [INFO][5618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.227 [WARNING][5618] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" HandleID="k8s-pod-network.0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.227 [INFO][5618] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" HandleID="k8s-pod-network.0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.235 [INFO][5618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:10.245383 containerd[1464]: 2025-09-13 00:11:10.238 [INFO][5610] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:11:10.248603 containerd[1464]: time="2025-09-13T00:11:10.246315791Z" level=info msg="TearDown network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\" successfully" Sep 13 00:11:10.248603 containerd[1464]: time="2025-09-13T00:11:10.246353106Z" level=info msg="StopPodSandbox for \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\" returns successfully" Sep 13 00:11:10.249637 containerd[1464]: time="2025-09-13T00:11:10.249595383Z" level=info msg="RemovePodSandbox for \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\"" Sep 13 00:11:10.249755 containerd[1464]: time="2025-09-13T00:11:10.249646915Z" level=info msg="Forcibly stopping sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\"" Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.401 [WARNING][5632] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0", GenerateName:"calico-apiserver-858b45798c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a122eeee-1c48-4d4d-8b11-6b176f51757c", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858b45798c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b", Pod:"calico-apiserver-858b45798c-wf79q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali414600e7f9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.403 [INFO][5632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.403 [INFO][5632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" iface="eth0" netns="" Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.403 [INFO][5632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.403 [INFO][5632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.484 [INFO][5639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" HandleID="k8s-pod-network.0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.484 [INFO][5639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.485 [INFO][5639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.511 [WARNING][5639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" HandleID="k8s-pod-network.0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.511 [INFO][5639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" HandleID="k8s-pod-network.0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.514 [INFO][5639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:10.523745 containerd[1464]: 2025-09-13 00:11:10.517 [INFO][5632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f" Sep 13 00:11:10.523745 containerd[1464]: time="2025-09-13T00:11:10.523727422Z" level=info msg="TearDown network for sandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\" successfully" Sep 13 00:11:10.540359 containerd[1464]: time="2025-09-13T00:11:10.540036200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:10.540359 containerd[1464]: time="2025-09-13T00:11:10.540141565Z" level=info msg="RemovePodSandbox \"0741213c9e34d7d90eea94b424b4ce3163dc3eb27ad5e7387199c1fff71ba77f\" returns successfully" Sep 13 00:11:10.541130 containerd[1464]: time="2025-09-13T00:11:10.541023711Z" level=info msg="StopPodSandbox for \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\"" Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.665 [WARNING][5657] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7e85b06c-dd91-46b3-8259-324d7b213ab5", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de", Pod:"goldmane-54d579b49d-xlg2k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b443dab7b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.666 [INFO][5657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.666 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" iface="eth0" netns="" Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.666 [INFO][5657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.666 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.786 [INFO][5665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" HandleID="k8s-pod-network.0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.794 [INFO][5665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.794 [INFO][5665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.815 [WARNING][5665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" HandleID="k8s-pod-network.0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.816 [INFO][5665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" HandleID="k8s-pod-network.0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.819 [INFO][5665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:10.839223 containerd[1464]: 2025-09-13 00:11:10.826 [INFO][5657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:11:10.839223 containerd[1464]: time="2025-09-13T00:11:10.837882245Z" level=info msg="TearDown network for sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\" successfully" Sep 13 00:11:10.839223 containerd[1464]: time="2025-09-13T00:11:10.837924025Z" level=info msg="StopPodSandbox for \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\" returns successfully" Sep 13 00:11:10.841820 containerd[1464]: time="2025-09-13T00:11:10.839357920Z" level=info msg="RemovePodSandbox for \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\"" Sep 13 00:11:10.841820 containerd[1464]: time="2025-09-13T00:11:10.839511281Z" level=info msg="Forcibly stopping sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\"" Sep 13 00:11:10.961151 kubelet[2572]: I0913 00:11:10.960460 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:10.969 [WARNING][5679] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7e85b06c-dd91-46b3-8259-324d7b213ab5", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"333ec22c3c7e29c1ad46421fcf11f60ab12b519c6c3b3f878b6b13c27f7039de", Pod:"goldmane-54d579b49d-xlg2k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b443dab7b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:10.970 [INFO][5679] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:10.970 [INFO][5679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" iface="eth0" netns="" Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:10.971 [INFO][5679] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:10.971 [INFO][5679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:11.038 [INFO][5687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" HandleID="k8s-pod-network.0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:11.039 [INFO][5687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:11.040 [INFO][5687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:11.065 [WARNING][5687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" HandleID="k8s-pod-network.0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:11.066 [INFO][5687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" HandleID="k8s-pod-network.0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-goldmane--54d579b49d--xlg2k-eth0" Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:11.070 [INFO][5687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:11.077652 containerd[1464]: 2025-09-13 00:11:11.073 [INFO][5679] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf" Sep 13 00:11:11.079747 containerd[1464]: time="2025-09-13T00:11:11.077743190Z" level=info msg="TearDown network for sandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\" successfully" Sep 13 00:11:11.086395 containerd[1464]: time="2025-09-13T00:11:11.085070833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:11.086395 containerd[1464]: time="2025-09-13T00:11:11.085311798Z" level=info msg="RemovePodSandbox \"0f462c1ce76a34abfcfbda1d4946a1fc37ec7b93843b59404dd54eeee91e03bf\" returns successfully" Sep 13 00:11:11.086613 containerd[1464]: time="2025-09-13T00:11:11.086543902Z" level=info msg="StopPodSandbox for \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\"" Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.255 [WARNING][5701] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbe2014e-59d9-4e26-9bc7-323114f09c1f", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45", Pod:"csi-node-driver-st7d7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c727f2642c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.259 [INFO][5701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.259 [INFO][5701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" iface="eth0" netns="" Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.259 [INFO][5701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.259 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.366 [INFO][5709] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" HandleID="k8s-pod-network.e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.370 [INFO][5709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.371 [INFO][5709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.393 [WARNING][5709] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" HandleID="k8s-pod-network.e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.393 [INFO][5709] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" HandleID="k8s-pod-network.e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.398 [INFO][5709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:11.410879 containerd[1464]: 2025-09-13 00:11:11.403 [INFO][5701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:11:11.410879 containerd[1464]: time="2025-09-13T00:11:11.410699792Z" level=info msg="TearDown network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\" successfully" Sep 13 00:11:11.410879 containerd[1464]: time="2025-09-13T00:11:11.410734780Z" level=info msg="StopPodSandbox for \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\" returns successfully" Sep 13 00:11:11.414297 containerd[1464]: time="2025-09-13T00:11:11.413024500Z" level=info msg="RemovePodSandbox for \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\"" Sep 13 00:11:11.414297 containerd[1464]: time="2025-09-13T00:11:11.413078455Z" level=info msg="Forcibly stopping sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\"" Sep 13 00:11:11.463023 containerd[1464]: time="2025-09-13T00:11:11.462958809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:11.465448 containerd[1464]: time="2025-09-13T00:11:11.465361600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:11:11.469165 containerd[1464]: time="2025-09-13T00:11:11.469111671Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:11.482554 containerd[1464]: time="2025-09-13T00:11:11.482495679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:11.486641 containerd[1464]: time="2025-09-13T00:11:11.486580296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.275239954s" Sep 13 00:11:11.486641 containerd[1464]: time="2025-09-13T00:11:11.486644320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference 
\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:11:11.494038 containerd[1464]: time="2025-09-13T00:11:11.493984946Z" level=info msg="CreateContainer within sandbox \"4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:11:11.522816 containerd[1464]: time="2025-09-13T00:11:11.522749495Z" level=info msg="CreateContainer within sandbox \"4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fb18535856742f036759a24f544eadb47df57177e47216cd6ca1cc7faae0cc6d\"" Sep 13 00:11:11.529194 containerd[1464]: time="2025-09-13T00:11:11.525726176Z" level=info msg="StartContainer for \"fb18535856742f036759a24f544eadb47df57177e47216cd6ca1cc7faae0cc6d\"" Sep 13 00:11:11.623549 systemd[1]: Started cri-containerd-fb18535856742f036759a24f544eadb47df57177e47216cd6ca1cc7faae0cc6d.scope - libcontainer container fb18535856742f036759a24f544eadb47df57177e47216cd6ca1cc7faae0cc6d. Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.559 [WARNING][5723] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbe2014e-59d9-4e26-9bc7-323114f09c1f", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"4d46b74de5fb62c2c37b94e40f595b334eb58016cb16599722b4f8a4729dac45", Pod:"csi-node-driver-st7d7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c727f2642c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.562 [INFO][5723] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.562 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" iface="eth0" netns="" Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.562 [INFO][5723] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.562 [INFO][5723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.683 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" HandleID="k8s-pod-network.e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.685 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.685 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.696 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" HandleID="k8s-pod-network.e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.697 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" HandleID="k8s-pod-network.e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-csi--node--driver--st7d7-eth0" Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.699 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:11.706367 containerd[1464]: 2025-09-13 00:11:11.702 [INFO][5723] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41" Sep 13 00:11:11.707342 containerd[1464]: time="2025-09-13T00:11:11.706433136Z" level=info msg="TearDown network for sandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\" successfully" Sep 13 00:11:11.712708 containerd[1464]: time="2025-09-13T00:11:11.712624162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:11:11.712899 containerd[1464]: time="2025-09-13T00:11:11.712755552Z" level=info msg="RemovePodSandbox \"e3c2eedd008feea4a8286182da7a133939b6595109057c5ff977a4f75e571b41\" returns successfully" Sep 13 00:11:11.714515 containerd[1464]: time="2025-09-13T00:11:11.714467042Z" level=info msg="StopPodSandbox for \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\"" Sep 13 00:11:11.818539 containerd[1464]: time="2025-09-13T00:11:11.818475080Z" level=info msg="StartContainer for \"fb18535856742f036759a24f544eadb47df57177e47216cd6ca1cc7faae0cc6d\" returns successfully" Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.839 [WARNING][5770] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0", GenerateName:"calico-apiserver-77b67c64b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b28393a-bc83-414b-b383-70eb716d64ee", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b67c64b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101", Pod:"calico-apiserver-77b67c64b5-zwfwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d4006ff1a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.839 [INFO][5770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.840 [INFO][5770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" iface="eth0" netns="" Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.840 [INFO][5770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.840 [INFO][5770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.890 [INFO][5790] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" HandleID="k8s-pod-network.a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.891 [INFO][5790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.891 [INFO][5790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.907 [WARNING][5790] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" HandleID="k8s-pod-network.a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.907 [INFO][5790] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" HandleID="k8s-pod-network.a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.908 [INFO][5790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:11.917553 containerd[1464]: 2025-09-13 00:11:11.910 [INFO][5770] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:11:11.920430 containerd[1464]: time="2025-09-13T00:11:11.917595886Z" level=info msg="TearDown network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\" successfully" Sep 13 00:11:11.920430 containerd[1464]: time="2025-09-13T00:11:11.917627617Z" level=info msg="StopPodSandbox for \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\" returns successfully" Sep 13 00:11:11.920430 containerd[1464]: time="2025-09-13T00:11:11.918663497Z" level=info msg="RemovePodSandbox for \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\"" Sep 13 00:11:11.920430 containerd[1464]: time="2025-09-13T00:11:11.918707581Z" level=info msg="Forcibly stopping sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\"" Sep 13 00:11:12.000738 kubelet[2572]: I0913 00:11:12.000332 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-858b45798c-lxwx2" podStartSLOduration=36.101557779 podStartE2EDuration="48.000303979s" podCreationTimestamp="2025-09-13 00:10:24 +0000 UTC" firstStartedPulling="2025-09-13 00:10:57.310329604 +0000 UTC m=+49.184346556" lastFinishedPulling="2025-09-13 00:11:09.209075801 +0000 UTC m=+61.083092756" observedRunningTime="2025-09-13 00:11:09.973305964 +0000 UTC m=+61.847322926" watchObservedRunningTime="2025-09-13 00:11:12.000303979 +0000 UTC m=+63.874320966" Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.041 [WARNING][5808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0", GenerateName:"calico-apiserver-77b67c64b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b28393a-bc83-414b-b383-70eb716d64ee", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b67c64b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"3138f72103939de9a384089b9bc511a5d34aa434e364ad06281fe8163b5d4101", Pod:"calico-apiserver-77b67c64b5-zwfwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d4006ff1a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.045 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.045 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" iface="eth0" netns="" Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.048 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.048 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.109 [INFO][5819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" HandleID="k8s-pod-network.a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.110 [INFO][5819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.110 [INFO][5819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.123 [WARNING][5819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" HandleID="k8s-pod-network.a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.123 [INFO][5819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" HandleID="k8s-pod-network.a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--zwfwq-eth0" Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.126 [INFO][5819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:12.130400 containerd[1464]: 2025-09-13 00:11:12.127 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f" Sep 13 00:11:12.132010 containerd[1464]: time="2025-09-13T00:11:12.130399745Z" level=info msg="TearDown network for sandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\" successfully" Sep 13 00:11:12.136634 containerd[1464]: time="2025-09-13T00:11:12.136532633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:11:12.136806 containerd[1464]: time="2025-09-13T00:11:12.136690345Z" level=info msg="RemovePodSandbox \"a6d8f2fc1dfdbff4a699c48c8835c3a1e270529e68b7831f218076364955541f\" returns successfully" Sep 13 00:11:12.137628 containerd[1464]: time="2025-09-13T00:11:12.137592491Z" level=info msg="StopPodSandbox for \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\"" Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.196 [WARNING][5833] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0", GenerateName:"calico-apiserver-858b45798c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858b45798c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735", Pod:"calico-apiserver-858b45798c-lxwx2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid877255ac7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.197 [INFO][5833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.197 [INFO][5833] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" iface="eth0" netns="" Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.197 [INFO][5833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.197 [INFO][5833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.243 [INFO][5840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" HandleID="k8s-pod-network.700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.244 [INFO][5840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.245 [INFO][5840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.258 [WARNING][5840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" HandleID="k8s-pod-network.700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.258 [INFO][5840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" HandleID="k8s-pod-network.700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.262 [INFO][5840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:12.266195 containerd[1464]: 2025-09-13 00:11:12.264 [INFO][5833] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:11:12.266195 containerd[1464]: time="2025-09-13T00:11:12.266165579Z" level=info msg="TearDown network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\" successfully" Sep 13 00:11:12.268467 containerd[1464]: time="2025-09-13T00:11:12.266207133Z" level=info msg="StopPodSandbox for \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\" returns successfully" Sep 13 00:11:12.268467 containerd[1464]: time="2025-09-13T00:11:12.267352167Z" level=info msg="RemovePodSandbox for \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\"" Sep 13 00:11:12.268467 containerd[1464]: time="2025-09-13T00:11:12.267411324Z" level=info msg="Forcibly stopping sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\"" Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.358 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0", GenerateName:"calico-apiserver-858b45798c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858b45798c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735", Pod:"calico-apiserver-858b45798c-lxwx2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid877255ac7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.359 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.359 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" iface="eth0" netns="" Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.359 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.359 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.405 [INFO][5863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" HandleID="k8s-pod-network.700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.406 [INFO][5863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.406 [INFO][5863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.415 [WARNING][5863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" HandleID="k8s-pod-network.700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.418 [INFO][5863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" HandleID="k8s-pod-network.700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.422 [INFO][5863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:12.429569 containerd[1464]: 2025-09-13 00:11:12.424 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491" Sep 13 00:11:12.430580 containerd[1464]: time="2025-09-13T00:11:12.429669648Z" level=info msg="TearDown network for sandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\" successfully" Sep 13 00:11:12.436648 containerd[1464]: time="2025-09-13T00:11:12.436581836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:12.436824 containerd[1464]: time="2025-09-13T00:11:12.436699703Z" level=info msg="RemovePodSandbox \"700b0969fdb420816ff461888b35fdcfadf99964aef4a9cb50787db336d63491\" returns successfully" Sep 13 00:11:12.520122 kubelet[2572]: I0913 00:11:12.517827 2572 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:11:12.520122 kubelet[2572]: I0913 00:11:12.517873 2572 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:11:13.721483 kubelet[2572]: I0913 00:11:13.720622 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-st7d7" podStartSLOduration=28.723446717 podStartE2EDuration="44.720593174s" podCreationTimestamp="2025-09-13 00:10:29 +0000 UTC" firstStartedPulling="2025-09-13 00:10:55.491799072 +0000 UTC m=+47.365816024" lastFinishedPulling="2025-09-13 00:11:11.488945522 +0000 UTC m=+63.362962481" observedRunningTime="2025-09-13 00:11:12.004999855 +0000 UTC m=+63.879016818" watchObservedRunningTime="2025-09-13 00:11:13.720593174 +0000 UTC m=+65.594610135" Sep 13 00:11:16.573326 kubelet[2572]: I0913 00:11:16.572954 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:16.643587 containerd[1464]: time="2025-09-13T00:11:16.643521272Z" level=info msg="StopContainer for \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\" with timeout 30 (s)" Sep 13 00:11:16.645600 containerd[1464]: time="2025-09-13T00:11:16.645545793Z" level=info msg="Stop container \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\" with signal terminated" Sep 13 00:11:16.696681 systemd[1]: cri-containerd-d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2.scope: Deactivated 
successfully. Sep 13 00:11:16.697476 systemd[1]: cri-containerd-d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2.scope: Consumed 1.856s CPU time. Sep 13 00:11:16.768929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2-rootfs.mount: Deactivated successfully. Sep 13 00:11:16.779373 systemd[1]: Created slice kubepods-besteffort-pod647099a5_2925_46d0_8904_0552fbcbef1c.slice - libcontainer container kubepods-besteffort-pod647099a5_2925_46d0_8904_0552fbcbef1c.slice. Sep 13 00:11:16.850780 kubelet[2572]: I0913 00:11:16.850542 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/647099a5-2925-46d0-8904-0552fbcbef1c-calico-apiserver-certs\") pod \"calico-apiserver-77b67c64b5-vd2w9\" (UID: \"647099a5-2925-46d0-8904-0552fbcbef1c\") " pod="calico-apiserver/calico-apiserver-77b67c64b5-vd2w9" Sep 13 00:11:16.850780 kubelet[2572]: I0913 00:11:16.850626 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t57dh\" (UniqueName: \"kubernetes.io/projected/647099a5-2925-46d0-8904-0552fbcbef1c-kube-api-access-t57dh\") pod \"calico-apiserver-77b67c64b5-vd2w9\" (UID: \"647099a5-2925-46d0-8904-0552fbcbef1c\") " pod="calico-apiserver/calico-apiserver-77b67c64b5-vd2w9" Sep 13 00:11:17.085043 containerd[1464]: time="2025-09-13T00:11:17.084971840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b67c64b5-vd2w9,Uid:647099a5-2925-46d0-8904-0552fbcbef1c,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:11:17.970674 containerd[1464]: time="2025-09-13T00:11:17.970456758Z" level=info msg="shim disconnected" id=d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2 namespace=k8s.io Sep 13 00:11:17.970674 containerd[1464]: time="2025-09-13T00:11:17.970545946Z" level=warning msg="cleaning up after shim disconnected" id=d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2 namespace=k8s.io Sep 13 00:11:17.970674 containerd[1464]: time="2025-09-13T00:11:17.970561727Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:18.027143 containerd[1464]: time="2025-09-13T00:11:18.026934077Z" level=info msg="StopContainer for \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\" returns successfully" Sep 13 00:11:18.029400 containerd[1464]: time="2025-09-13T00:11:18.029338286Z" level=info msg="StopPodSandbox for \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\"" Sep 13 00:11:18.029557 containerd[1464]: time="2025-09-13T00:11:18.029414280Z" level=info msg="Container to stop \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:11:18.040712 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735-shm.mount: Deactivated successfully. Sep 13 00:11:18.057854 systemd[1]: cri-containerd-f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735.scope: Deactivated successfully. 
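[Editor's note] The StopContainer lines spell out the standard escalation: send SIGTERM, wait up to the 30-second grace period, SIGKILL on expiry (here the task exited within about 1.3 s, so the scope simply deactivates). A sketch of that pattern with the containerd Go client; this shows the general technique, not the CRI plugin's actual code:

    package containerstop

    import (
        "context"
        "syscall"
        "time"

        "github.com/containerd/containerd"
    )

    // stopWithTimeout implements TERM-then-KILL as implied by
    // `StopContainer ... with timeout 30 (s)` / `with signal terminated`.
    func stopWithTimeout(ctx context.Context, task containerd.Task, timeout time.Duration) error {
        exitCh, err := task.Wait(ctx) // subscribe before signalling so the exit isn't missed
        if err != nil {
            return err
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            return err
        }
        select {
        case <-exitCh: // exited within the grace period, as d8c20b86... did above
            return nil
        case <-time.After(timeout):
            return task.Kill(ctx, syscall.SIGKILL) // grace period expired
        }
    }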
Sep 13 00:11:18.118787 containerd[1464]: time="2025-09-13T00:11:18.118699991Z" level=info msg="shim disconnected" id=f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735 namespace=k8s.io Sep 13 00:11:18.118787 containerd[1464]: time="2025-09-13T00:11:18.118789375Z" level=warning msg="cleaning up after shim disconnected" id=f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735 namespace=k8s.io Sep 13 00:11:18.119490 containerd[1464]: time="2025-09-13T00:11:18.118804216Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:18.120542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735-rootfs.mount: Deactivated successfully. Sep 13 00:11:18.242794 systemd-networkd[1370]: calia8640855205: Link UP Sep 13 00:11:18.244087 systemd-networkd[1370]: calia8640855205: Gained carrier Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.084 [INFO][5902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0 calico-apiserver-77b67c64b5- calico-apiserver 647099a5-2925-46d0-8904-0552fbcbef1c 1162 0 2025-09-13 00:11:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77b67c64b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c calico-apiserver-77b67c64b5-vd2w9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia8640855205 [] [] }} ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-vd2w9" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.084 [INFO][5902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-vd2w9" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.156 [INFO][5943] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" HandleID="k8s-pod-network.8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.157 [INFO][5943] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" HandleID="k8s-pod-network.8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", "pod":"calico-apiserver-77b67c64b5-vd2w9", 
"timestamp":"2025-09-13 00:11:18.156921006 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.157 [INFO][5943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.157 [INFO][5943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.157 [INFO][5943] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c' Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.173 [INFO][5943] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.180 [INFO][5943] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.192 [INFO][5943] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.196 [INFO][5943] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.201 [INFO][5943] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.202 [INFO][5943] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.204 [INFO][5943] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.215 [INFO][5943] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.228 [INFO][5943] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.74/26] block=192.168.61.64/26 handle="k8s-pod-network.8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.229 [INFO][5943] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.74/26] handle="k8s-pod-network.8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" host="ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c" Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.229 [INFO][5943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:18.278299 containerd[1464]: 2025-09-13 00:11:18.229 [INFO][5943] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.74/26] IPv6=[] ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" HandleID="k8s-pod-network.8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" Sep 13 00:11:18.282726 containerd[1464]: 2025-09-13 00:11:18.233 [INFO][5902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-vd2w9" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0", GenerateName:"calico-apiserver-77b67c64b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"647099a5-2925-46d0-8904-0552fbcbef1c", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b67c64b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"", Pod:"calico-apiserver-77b67c64b5-vd2w9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8640855205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:18.282726 containerd[1464]: 2025-09-13 00:11:18.233 [INFO][5902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.74/32] ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-vd2w9" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" Sep 13 00:11:18.282726 containerd[1464]: 2025-09-13 00:11:18.233 [INFO][5902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8640855205 ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-vd2w9" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" Sep 13 00:11:18.282726 containerd[1464]: 2025-09-13 00:11:18.244 [INFO][5902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Namespace="calico-apiserver" 
Pod="calico-apiserver-77b67c64b5-vd2w9" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" Sep 13 00:11:18.282726 containerd[1464]: 2025-09-13 00:11:18.247 [INFO][5902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-vd2w9" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0", GenerateName:"calico-apiserver-77b67c64b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"647099a5-2925-46d0-8904-0552fbcbef1c", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b67c64b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c", ContainerID:"8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d", Pod:"calico-apiserver-77b67c64b5-vd2w9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8640855205", MAC:"92:07:40:67:42:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:18.282726 containerd[1464]: 2025-09-13 00:11:18.274 [INFO][5902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d" Namespace="calico-apiserver" Pod="calico-apiserver-77b67c64b5-vd2w9" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--77b67c64b5--vd2w9-eth0" Sep 13 00:11:18.327298 systemd-networkd[1370]: calid877255ac7b: Link DOWN Sep 13 00:11:18.328670 systemd-networkd[1370]: calid877255ac7b: Lost carrier Sep 13 00:11:18.341726 containerd[1464]: time="2025-09-13T00:11:18.338277172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:18.341726 containerd[1464]: time="2025-09-13T00:11:18.338363677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:18.341726 containerd[1464]: time="2025-09-13T00:11:18.338695649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:18.341726 containerd[1464]: time="2025-09-13T00:11:18.338851956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:18.399941 systemd[1]: Started cri-containerd-8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d.scope - libcontainer container 8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d. Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.323 [INFO][5971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.323 [INFO][5971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" iface="eth0" netns="/var/run/netns/cni-73d139e1-a082-79e6-7df5-6e1682b5a6f1" Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.324 [INFO][5971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" iface="eth0" netns="/var/run/netns/cni-73d139e1-a082-79e6-7df5-6e1682b5a6f1" Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.343 [INFO][5971] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" after=19.31191ms iface="eth0" netns="/var/run/netns/cni-73d139e1-a082-79e6-7df5-6e1682b5a6f1" Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.343 [INFO][5971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.343 [INFO][5971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.424 [INFO][6007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.425 [INFO][6007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.425 [INFO][6007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
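[Editor's note] "Entered netns, deleting veth ... Deleted device in netns after=19.31191ms" is the dataplane half of DEL: join the pod's network namespace and remove eth0; deleting one end of a veth pair also destroys the host-side cali* peer, which is why calid877255ac7b "Lost carrier" in the systemd-networkd lines. A hedged sketch of the general technique using the vishvananda netlink/netns packages (not necessarily Calico's exact code):

    package podnet

    import (
        "runtime"

        "github.com/vishvananda/netlink"
        "github.com/vishvananda/netns"
    )

    // deletePodVeth removes the workload-side interface from inside the
    // pod's network namespace, staying idempotent the way the log is.
    func deletePodVeth(nsPath, ifName string) error {
        runtime.LockOSThread() // the netns is a property of the OS thread
        defer runtime.UnlockOSThread()

        orig, err := netns.Get()
        if err != nil {
            return err
        }
        defer orig.Close()
        defer netns.Set(orig) // hop back no matter what

        target, err := netns.GetFromPath(nsPath) // e.g. /var/run/netns/cni-73d139e1-...
        if err != nil {
            return nil // netns already gone: keep DEL idempotent
        }
        defer target.Close()
        if err := netns.Set(target); err != nil {
            return err
        }
        link, err := netlink.LinkByName(ifName) // "eth0" inside the pod
        if err != nil {
            return nil // device already gone
        }
        return netlink.LinkDel(link)
    }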
Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.507 [INFO][6007] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.507 [INFO][6007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0" Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.510 [INFO][6007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:18.516554 containerd[1464]: 2025-09-13 00:11:18.513 [INFO][5971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Sep 13 00:11:18.518630 containerd[1464]: time="2025-09-13T00:11:18.517977628Z" level=info msg="TearDown network for sandbox \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\" successfully" Sep 13 00:11:18.518630 containerd[1464]: time="2025-09-13T00:11:18.518018944Z" level=info msg="StopPodSandbox for \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\" returns successfully" Sep 13 00:11:18.579511 containerd[1464]: time="2025-09-13T00:11:18.579445705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b67c64b5-vd2w9,Uid:647099a5-2925-46d0-8904-0552fbcbef1c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d\"" Sep 13 00:11:18.586626 containerd[1464]: time="2025-09-13T00:11:18.586570969Z" level=info msg="CreateContainer within sandbox \"8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:18.603095 containerd[1464]: time="2025-09-13T00:11:18.602949030Z" level=info msg="CreateContainer within sandbox \"8eccf66bb2ac0aa5fff5082d43d0c02db101c762dec8c76362c7eb358b36c88d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22b96602edf2df54c2c1701422d302cf26b6127952f5b433aca40d9099a7b55b\"" Sep 13 00:11:18.603914 containerd[1464]: time="2025-09-13T00:11:18.603868081Z" level=info msg="StartContainer for \"22b96602edf2df54c2c1701422d302cf26b6127952f5b433aca40d9099a7b55b\"" Sep 13 00:11:18.649594 systemd[1]: Started cri-containerd-22b96602edf2df54c2c1701422d302cf26b6127952f5b433aca40d9099a7b55b.scope - libcontainer container 22b96602edf2df54c2c1701422d302cf26b6127952f5b433aca40d9099a7b55b. 
Sep 13 00:11:18.673489 kubelet[2572]: I0913 00:11:18.672806 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j6qc\" (UniqueName: \"kubernetes.io/projected/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e-kube-api-access-6j6qc\") pod \"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e\" (UID: \"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e\") " Sep 13 00:11:18.673489 kubelet[2572]: I0913 00:11:18.672890 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e-calico-apiserver-certs\") pod \"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e\" (UID: \"4bc4ce4d-5c95-4963-89e0-e92aa88cde6e\") " Sep 13 00:11:18.681996 kubelet[2572]: I0913 00:11:18.680465 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "4bc4ce4d-5c95-4963-89e0-e92aa88cde6e" (UID: "4bc4ce4d-5c95-4963-89e0-e92aa88cde6e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:11:18.685332 kubelet[2572]: I0913 00:11:18.685278 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e-kube-api-access-6j6qc" (OuterVolumeSpecName: "kube-api-access-6j6qc") pod "4bc4ce4d-5c95-4963-89e0-e92aa88cde6e" (UID: "4bc4ce4d-5c95-4963-89e0-e92aa88cde6e"). InnerVolumeSpecName "kube-api-access-6j6qc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:11:18.732184 containerd[1464]: time="2025-09-13T00:11:18.732118176Z" level=info msg="StartContainer for \"22b96602edf2df54c2c1701422d302cf26b6127952f5b433aca40d9099a7b55b\" returns successfully" Sep 13 00:11:18.775380 kubelet[2572]: I0913 00:11:18.774255 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6j6qc\" (UniqueName: \"kubernetes.io/projected/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e-kube-api-access-6j6qc\") on node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" DevicePath \"\"" Sep 13 00:11:18.775380 kubelet[2572]: I0913 00:11:18.774312 2572 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e-calico-apiserver-certs\") on node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" DevicePath \"\"" Sep 13 00:11:18.990388 systemd[1]: run-netns-cni\x2d73d139e1\x2da082\x2d79e6\x2d7df5\x2d6e1682b5a6f1.mount: Deactivated successfully. Sep 13 00:11:18.990825 systemd[1]: var-lib-kubelet-pods-4bc4ce4d\x2d5c95\x2d4963\x2d89e0\x2de92aa88cde6e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6j6qc.mount: Deactivated successfully. Sep 13 00:11:18.992876 systemd[1]: var-lib-kubelet-pods-4bc4ce4d\x2d5c95\x2d4963\x2d89e0\x2de92aa88cde6e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
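[Editor's note] The mount unit names above (var-lib-kubelet-pods-4bc4ce4d\x2d5c95...-volumes-kubernetes.io\x7eprojected-...) are systemd path escaping: "/" becomes "-", and bytes outside the plain unit-name alphabet are hex-escaped, so the "-" in the pod UID becomes \x2d and the "~" in kubernetes.io~projected becomes \x7e. A rough sketch of that encoding; this approximates `systemd-escape --path`, which remains the authoritative tool:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath: strip slashes at the ends, map inner '/' to '-',
    // hex-escape everything else that isn't alphanumeric, '.', or '_'.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '.', c == '_':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // '-' -> \x2d, '~' -> \x7e
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/var/lib/kubelet/pods/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e/volumes/kubernetes.io~projected/kube-api-access-6j6qc") + ".mount")
    }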
Sep 13 00:11:19.008978 kubelet[2572]: I0913 00:11:19.008939 2572 scope.go:117] "RemoveContainer" containerID="d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2" Sep 13 00:11:19.019163 containerd[1464]: time="2025-09-13T00:11:19.019103578Z" level=info msg="RemoveContainer for \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\"" Sep 13 00:11:19.040396 containerd[1464]: time="2025-09-13T00:11:19.037919610Z" level=info msg="RemoveContainer for \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\" returns successfully" Sep 13 00:11:19.044612 kubelet[2572]: I0913 00:11:19.044570 2572 scope.go:117] "RemoveContainer" containerID="d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2" Sep 13 00:11:19.047300 containerd[1464]: time="2025-09-13T00:11:19.045774778Z" level=error msg="ContainerStatus for \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\": not found" Sep 13 00:11:19.047784 systemd[1]: Removed slice kubepods-besteffort-pod4bc4ce4d_5c95_4963_89e0_e92aa88cde6e.slice - libcontainer container kubepods-besteffort-pod4bc4ce4d_5c95_4963_89e0_e92aa88cde6e.slice. Sep 13 00:11:19.050252 kubelet[2572]: E0913 00:11:19.047531 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\": not found" containerID="d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2" Sep 13 00:11:19.050252 kubelet[2572]: I0913 00:11:19.048329 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2"} err="failed to get container status \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8c20b86361ef8141437374d4914a4a6c03b4deba03a08e8441233228fe453d2\": not found" Sep 13 00:11:19.048388 systemd[1]: kubepods-besteffort-pod4bc4ce4d_5c95_4963_89e0_e92aa88cde6e.slice: Consumed 1.899s CPU time. 
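[Editor's note] The RemoveContainer/ContainerStatus pair above is a benign race: kubelet removes the container, then a status query for the same ID gets NotFound, which is logged at error level but then downgraded to "already deleted". A sketch of tolerating that gRPC code, assuming a CRI RuntimeServiceClient; only the error handling is the point:

    package crihelpers

    import (
        "context"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // containerGone reports whether a just-removed container is gone,
    // treating NotFound as success rather than a failure, as kubelet does.
    func containerGone(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (bool, error) {
        _, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
        if status.Code(err) == codes.NotFound {
            return true, nil // matches the "not found" ContainerStatus pair in the log
        }
        return false, err // nil err means the container still exists
    }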
Sep 13 00:11:19.065863 kubelet[2572]: I0913 00:11:19.063058 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77b67c64b5-vd2w9" podStartSLOduration=3.063032834 podStartE2EDuration="3.063032834s" podCreationTimestamp="2025-09-13 00:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:19.056839901 +0000 UTC m=+70.930856890" watchObservedRunningTime="2025-09-13 00:11:19.063032834 +0000 UTC m=+70.937049797" Sep 13 00:11:19.577531 systemd-networkd[1370]: calia8640855205: Gained IPv6LL Sep 13 00:11:20.337191 kubelet[2572]: I0913 00:11:20.337137 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc4ce4d-5c95-4963-89e0-e92aa88cde6e" path="/var/lib/kubelet/pods/4bc4ce4d-5c95-4963-89e0-e92aa88cde6e/volumes" Sep 13 00:11:21.662496 containerd[1464]: time="2025-09-13T00:11:21.662440381Z" level=info msg="StopContainer for \"91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb\" with timeout 30 (s)" Sep 13 00:11:21.665346 containerd[1464]: time="2025-09-13T00:11:21.663904939Z" level=info msg="Stop container \"91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb\" with signal terminated" Sep 13 00:11:21.718306 systemd[1]: cri-containerd-91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb.scope: Deactivated successfully. Sep 13 00:11:21.718873 systemd[1]: cri-containerd-91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb.scope: Consumed 1.839s CPU time. Sep 13 00:11:21.771266 containerd[1464]: time="2025-09-13T00:11:21.767301256Z" level=info msg="shim disconnected" id=91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb namespace=k8s.io Sep 13 00:11:21.771266 containerd[1464]: time="2025-09-13T00:11:21.767397610Z" level=warning msg="cleaning up after shim disconnected" id=91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb namespace=k8s.io Sep 13 00:11:21.771266 containerd[1464]: time="2025-09-13T00:11:21.767415452Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:21.782960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb-rootfs.mount: Deactivated successfully. Sep 13 00:11:21.849707 containerd[1464]: time="2025-09-13T00:11:21.849652736Z" level=info msg="StopContainer for \"91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb\" returns successfully" Sep 13 00:11:21.851053 containerd[1464]: time="2025-09-13T00:11:21.850974247Z" level=info msg="StopPodSandbox for \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\"" Sep 13 00:11:21.852100 containerd[1464]: time="2025-09-13T00:11:21.851949983Z" level=info msg="Container to stop \"91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:11:21.865205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b-shm.mount: Deactivated successfully. Sep 13 00:11:21.879361 systemd[1]: cri-containerd-68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b.scope: Deactivated successfully. 
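[Editor's note] The latency-tracker numbers are internally consistent and show what each field means. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp: for calico-apiserver-77b67c64b5-vd2w9 above, 00:11:19.063032834 - 00:11:16 = 3.063032834s, and because the pull timestamps are zero-valued (no image pull was needed) the SLO duration equals the E2E duration. podStartSLOduration subtracts image-pull time from the E2E figure: for calico-apiserver-858b45798c-lxwx2 earlier, 48.000303979 - (61.083092756 - 49.184346556) = 48.000303979 - 11.898746200 = 36.101557779, exactly the reported value (the m=+... monotonic offsets make the subtraction exact).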
Sep 13 00:11:21.936307 ntpd[1427]: Listen normally on 19 calia8640855205 [fe80::ecee:eeff:feee:eeee%16]:123 Sep 13 00:11:21.937217 ntpd[1427]: 13 Sep 00:11:21 ntpd[1427]: Listen normally on 19 calia8640855205 [fe80::ecee:eeff:feee:eeee%16]:123 Sep 13 00:11:21.937217 ntpd[1427]: 13 Sep 00:11:21 ntpd[1427]: Deleting interface #18 calid877255ac7b, fe80::ecee:eeff:feee:eeee%15#123, interface stats: received=0, sent=0, dropped=0, active_time=20 secs Sep 13 00:11:21.940415 containerd[1464]: time="2025-09-13T00:11:21.934858462Z" level=info msg="shim disconnected" id=68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b namespace=k8s.io Sep 13 00:11:21.940415 containerd[1464]: time="2025-09-13T00:11:21.934928725Z" level=warning msg="cleaning up after shim disconnected" id=68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b namespace=k8s.io Sep 13 00:11:21.940415 containerd[1464]: time="2025-09-13T00:11:21.934946788Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:21.936391 ntpd[1427]: Deleting interface #18 calid877255ac7b, fe80::ecee:eeff:feee:eeee%15#123, interface stats: received=0, sent=0, dropped=0, active_time=20 secs Sep 13 00:11:21.951524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b-rootfs.mount: Deactivated successfully. Sep 13 00:11:22.037259 kubelet[2572]: I0913 00:11:22.036562 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Sep 13 00:11:22.108398 systemd-networkd[1370]: cali414600e7f9e: Link DOWN Sep 13 00:11:22.108878 systemd-networkd[1370]: cali414600e7f9e: Lost carrier Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.103 [INFO][6207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.103 [INFO][6207] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" iface="eth0" netns="/var/run/netns/cni-e04376fe-42e7-44db-4e49-0a4d5612760e" Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.103 [INFO][6207] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" iface="eth0" netns="/var/run/netns/cni-e04376fe-42e7-44db-4e49-0a4d5612760e" Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.116 [INFO][6207] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" after=12.697031ms iface="eth0" netns="/var/run/netns/cni-e04376fe-42e7-44db-4e49-0a4d5612760e" Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.116 [INFO][6207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.116 [INFO][6207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.165 [INFO][6215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.166 [INFO][6215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.166 [INFO][6215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.230 [INFO][6215] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.231 [INFO][6215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0" Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.233 [INFO][6215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:22.241730 containerd[1464]: 2025-09-13 00:11:22.238 [INFO][6207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Sep 13 00:11:22.246996 containerd[1464]: time="2025-09-13T00:11:22.242082369Z" level=info msg="TearDown network for sandbox \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\" successfully" Sep 13 00:11:22.246996 containerd[1464]: time="2025-09-13T00:11:22.242122658Z" level=info msg="StopPodSandbox for \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\" returns successfully" Sep 13 00:11:22.257279 systemd[1]: run-netns-cni\x2de04376fe\x2d42e7\x2d44db\x2d4e49\x2d0a4d5612760e.mount: Deactivated successfully. 
Sep 13 00:11:22.409496 kubelet[2572]: I0913 00:11:22.409429 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a122eeee-1c48-4d4d-8b11-6b176f51757c-calico-apiserver-certs\") pod \"a122eeee-1c48-4d4d-8b11-6b176f51757c\" (UID: \"a122eeee-1c48-4d4d-8b11-6b176f51757c\") " Sep 13 00:11:22.409697 kubelet[2572]: I0913 00:11:22.409526 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wn7w\" (UniqueName: \"kubernetes.io/projected/a122eeee-1c48-4d4d-8b11-6b176f51757c-kube-api-access-7wn7w\") pod \"a122eeee-1c48-4d4d-8b11-6b176f51757c\" (UID: \"a122eeee-1c48-4d4d-8b11-6b176f51757c\") " Sep 13 00:11:22.420281 kubelet[2572]: I0913 00:11:22.418055 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a122eeee-1c48-4d4d-8b11-6b176f51757c-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "a122eeee-1c48-4d4d-8b11-6b176f51757c" (UID: "a122eeee-1c48-4d4d-8b11-6b176f51757c"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:11:22.422267 kubelet[2572]: I0913 00:11:22.421282 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a122eeee-1c48-4d4d-8b11-6b176f51757c-kube-api-access-7wn7w" (OuterVolumeSpecName: "kube-api-access-7wn7w") pod "a122eeee-1c48-4d4d-8b11-6b176f51757c" (UID: "a122eeee-1c48-4d4d-8b11-6b176f51757c"). InnerVolumeSpecName "kube-api-access-7wn7w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:11:22.430476 systemd[1]: var-lib-kubelet-pods-a122eeee\x2d1c48\x2d4d4d\x2d8b11\x2d6b176f51757c-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:11:22.511139 kubelet[2572]: I0913 00:11:22.510932 2572 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a122eeee-1c48-4d4d-8b11-6b176f51757c-calico-apiserver-certs\") on node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" DevicePath \"\"" Sep 13 00:11:22.511139 kubelet[2572]: I0913 00:11:22.510990 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7wn7w\" (UniqueName: \"kubernetes.io/projected/a122eeee-1c48-4d4d-8b11-6b176f51757c-kube-api-access-7wn7w\") on node \"ci-4081-3-5-nightly-20250912-2100-20bc6d60f49c8c59945c\" DevicePath \"\"" Sep 13 00:11:22.776791 systemd[1]: var-lib-kubelet-pods-a122eeee\x2d1c48\x2d4d4d\x2d8b11\x2d6b176f51757c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7wn7w.mount: Deactivated successfully. Sep 13 00:11:23.053751 systemd[1]: Removed slice kubepods-besteffort-poda122eeee_1c48_4d4d_8b11_6b176f51757c.slice - libcontainer container kubepods-besteffort-poda122eeee_1c48_4d4d_8b11_6b176f51757c.slice. Sep 13 00:11:23.053959 systemd[1]: kubepods-besteffort-poda122eeee_1c48_4d4d_8b11_6b176f51757c.slice: Consumed 1.894s CPU time. 
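[Editor's note] The slice names show how kubelet's systemd cgroup driver encodes pods: QoS class plus pod UID, with the UID's dashes swapped for underscores because "-" is systemd's slice hierarchy separator. A one-line helper reproducing the name seen above (hypothetical, for illustration):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName mirrors the kubepods-<qos>-pod<uid>.slice names above;
    // dashes in the UID become underscores so they don't read as nesting.
    func podSliceName(qos, uid string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "a122eeee-1c48-4d4d-8b11-6b176f51757c"))
        // kubepods-besteffort-poda122eeee_1c48_4d4d_8b11_6b176f51757c.slice
    }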
Sep 13 00:11:24.334440 kubelet[2572]: I0913 00:11:24.334362 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a122eeee-1c48-4d4d-8b11-6b176f51757c" path="/var/lib/kubelet/pods/a122eeee-1c48-4d4d-8b11-6b176f51757c/volumes" Sep 13 00:11:24.935991 ntpd[1427]: Deleting interface #13 cali414600e7f9e, fe80::ecee:eeff:feee:eeee%10#123, interface stats: received=0, sent=0, dropped=0, active_time=23 secs Sep 13 00:11:24.936568 ntpd[1427]: 13 Sep 00:11:24 ntpd[1427]: Deleting interface #13 cali414600e7f9e, fe80::ecee:eeff:feee:eeee%10#123, interface stats: received=0, sent=0, dropped=0, active_time=23 secs Sep 13 00:11:29.988729 systemd[1]: Started sshd@8-10.128.0.50:22-147.75.109.163:32860.service - OpenSSH per-connection server daemon (147.75.109.163:32860). Sep 13 00:11:30.398551 sshd[6240]: Accepted publickey for core from 147.75.109.163 port 32860 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:11:30.401825 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:30.414463 systemd-logind[1439]: New session 8 of user core. Sep 13 00:11:30.416901 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:11:30.826818 sshd[6240]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:30.835503 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:11:30.840586 systemd[1]: sshd@8-10.128.0.50:22-147.75.109.163:32860.service: Deactivated successfully. Sep 13 00:11:30.846319 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:11:30.849335 systemd-logind[1439]: Removed session 8. Sep 13 00:11:30.895314 systemd[1]: run-containerd-runc-k8s.io-987e38f737e0a75d3d6ac04b26b22a4a6dfccec40847c76aa458ba37c0ae9879-runc.S0vEnD.mount: Deactivated successfully. Sep 13 00:11:35.906407 systemd[1]: Started sshd@9-10.128.0.50:22-147.75.109.163:53584.service - OpenSSH per-connection server daemon (147.75.109.163:53584). Sep 13 00:11:36.317019 sshd[6305]: Accepted publickey for core from 147.75.109.163 port 53584 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:11:36.319847 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:36.335645 systemd-logind[1439]: New session 9 of user core. Sep 13 00:11:36.340676 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:11:36.735591 sshd[6305]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:36.745034 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:11:36.746328 systemd[1]: sshd@9-10.128.0.50:22-147.75.109.163:53584.service: Deactivated successfully. Sep 13 00:11:36.752647 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:11:36.757915 systemd-logind[1439]: Removed session 9. Sep 13 00:11:41.814392 systemd[1]: Started sshd@10-10.128.0.50:22-147.75.109.163:41586.service - OpenSSH per-connection server daemon (147.75.109.163:41586). Sep 13 00:11:42.223983 sshd[6362]: Accepted publickey for core from 147.75.109.163 port 41586 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU Sep 13 00:11:42.227123 sshd[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:42.239595 systemd-logind[1439]: New session 10 of user core. Sep 13 00:11:42.248045 systemd[1]: Started session-10.scope - Session 10 of User core. 
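[Editor's note] The SHA256:pkRKmUiE... strings in the "Accepted publickey" lines are OpenSSH-style fingerprints: unpadded base64 of a SHA-256 digest over the wire-format public key. golang.org/x/crypto/ssh computes the same format; a small sketch (the key file path is illustrative):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        raw, err := os.ReadFile("/home/core/.ssh/authorized_keys") // illustrative path
        if err != nil {
            panic(err)
        }
        pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
        if err != nil {
            panic(err)
        }
        // Prints the same "SHA256:..." form sshd logs on publickey auth.
        fmt.Println(ssh.FingerprintSHA256(pub))
    }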
Sep 13 00:11:42.636551 sshd[6362]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:42.644282 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:11:42.647027 systemd[1]: sshd@10-10.128.0.50:22-147.75.109.163:41586.service: Deactivated successfully.
Sep 13 00:11:42.653123 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:11:42.657852 systemd-logind[1439]: Removed session 10.
Sep 13 00:11:42.710375 systemd[1]: Started sshd@11-10.128.0.50:22-147.75.109.163:41598.service - OpenSSH per-connection server daemon (147.75.109.163:41598).
Sep 13 00:11:43.104514 sshd[6378]: Accepted publickey for core from 147.75.109.163 port 41598 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:11:43.106677 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:43.119861 systemd-logind[1439]: New session 11 of user core.
Sep 13 00:11:43.126841 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:11:43.592165 sshd[6378]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:43.600181 systemd[1]: sshd@11-10.128.0.50:22-147.75.109.163:41598.service: Deactivated successfully.
Sep 13 00:11:43.600556 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:11:43.607005 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:11:43.609908 systemd-logind[1439]: Removed session 11.
Sep 13 00:11:43.666406 systemd[1]: Started sshd@12-10.128.0.50:22-147.75.109.163:41610.service - OpenSSH per-connection server daemon (147.75.109.163:41610).
Sep 13 00:11:44.073341 sshd[6390]: Accepted publickey for core from 147.75.109.163 port 41610 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:11:44.074398 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:44.087746 systemd-logind[1439]: New session 12 of user core.
Sep 13 00:11:44.092504 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:11:44.479653 sshd[6390]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:44.489145 systemd[1]: sshd@12-10.128.0.50:22-147.75.109.163:41610.service: Deactivated successfully.
Sep 13 00:11:44.495804 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:11:44.498610 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:11:44.501498 systemd-logind[1439]: Removed session 12.
Sep 13 00:11:49.555398 systemd[1]: Started sshd@13-10.128.0.50:22-147.75.109.163:41622.service - OpenSSH per-connection server daemon (147.75.109.163:41622).
Sep 13 00:11:49.954495 sshd[6434]: Accepted publickey for core from 147.75.109.163 port 41622 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:11:49.957690 sshd[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:49.969505 systemd-logind[1439]: New session 13 of user core.
Sep 13 00:11:49.975519 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:11:50.428588 sshd[6434]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:50.439799 systemd[1]: sshd@13-10.128.0.50:22-147.75.109.163:41622.service: Deactivated successfully.
Sep 13 00:11:50.449651 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:11:50.458952 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:11:50.462793 systemd-logind[1439]: Removed session 13.
Sep 13 00:11:55.503241 systemd[1]: Started sshd@14-10.128.0.50:22-147.75.109.163:36320.service - OpenSSH per-connection server daemon (147.75.109.163:36320).
Sep 13 00:11:55.919283 sshd[6447]: Accepted publickey for core from 147.75.109.163 port 36320 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:11:55.920593 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:55.929482 systemd-logind[1439]: New session 14 of user core.
Sep 13 00:11:55.935477 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:11:56.334171 sshd[6447]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:56.344540 systemd[1]: sshd@14-10.128.0.50:22-147.75.109.163:36320.service: Deactivated successfully.
Sep 13 00:11:56.347872 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:11:56.350034 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:11:56.351923 systemd-logind[1439]: Removed session 14.
Sep 13 00:12:01.402706 systemd[1]: Started sshd@15-10.128.0.50:22-147.75.109.163:38682.service - OpenSSH per-connection server daemon (147.75.109.163:38682).
Sep 13 00:12:01.782342 sshd[6480]: Accepted publickey for core from 147.75.109.163 port 38682 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:12:01.784466 sshd[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:01.793135 systemd-logind[1439]: New session 15 of user core.
Sep 13 00:12:01.801909 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:12:02.175800 sshd[6480]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:02.188952 systemd[1]: sshd@15-10.128.0.50:22-147.75.109.163:38682.service: Deactivated successfully.
Sep 13 00:12:02.189602 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:12:02.198591 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:12:02.203060 systemd-logind[1439]: Removed session 15.
Sep 13 00:12:07.250080 systemd[1]: Started sshd@16-10.128.0.50:22-147.75.109.163:38688.service - OpenSSH per-connection server daemon (147.75.109.163:38688).
Sep 13 00:12:07.644130 sshd[6495]: Accepted publickey for core from 147.75.109.163 port 38688 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:12:07.646117 sshd[6495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:07.659437 systemd-logind[1439]: New session 16 of user core.
Sep 13 00:12:07.665481 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:12:08.038592 sshd[6495]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:08.045880 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:12:08.047120 systemd[1]: sshd@16-10.128.0.50:22-147.75.109.163:38688.service: Deactivated successfully.
Sep 13 00:12:08.051205 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:12:08.059570 systemd-logind[1439]: Removed session 16.
Sep 13 00:12:08.116413 systemd[1]: Started sshd@17-10.128.0.50:22-147.75.109.163:38704.service - OpenSSH per-connection server daemon (147.75.109.163:38704).
Sep 13 00:12:08.531820 sshd[6508]: Accepted publickey for core from 147.75.109.163 port 38704 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:12:08.534266 sshd[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:08.552452 systemd-logind[1439]: New session 17 of user core.
Sep 13 00:12:08.559488 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:12:09.024572 sshd[6508]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:09.034781 systemd[1]: sshd@17-10.128.0.50:22-147.75.109.163:38704.service: Deactivated successfully.
Sep 13 00:12:09.040758 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:12:09.042711 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:12:09.046004 systemd-logind[1439]: Removed session 17.
Sep 13 00:12:09.101388 systemd[1]: Started sshd@18-10.128.0.50:22-147.75.109.163:38706.service - OpenSSH per-connection server daemon (147.75.109.163:38706).
Sep 13 00:12:09.513134 sshd[6541]: Accepted publickey for core from 147.75.109.163 port 38706 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:12:09.514289 sshd[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:09.526773 systemd-logind[1439]: New session 18 of user core.
Sep 13 00:12:09.532081 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:12:10.776514 sshd[6541]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:10.783430 systemd[1]: sshd@18-10.128.0.50:22-147.75.109.163:38706.service: Deactivated successfully.
Sep 13 00:12:10.788503 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:12:10.789704 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:12:10.791879 systemd-logind[1439]: Removed session 18.
Sep 13 00:12:10.851833 systemd[1]: Started sshd@19-10.128.0.50:22-147.75.109.163:57496.service - OpenSSH per-connection server daemon (147.75.109.163:57496).
Sep 13 00:12:11.255850 sshd[6560]: Accepted publickey for core from 147.75.109.163 port 57496 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:12:11.257900 sshd[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:11.269658 systemd-logind[1439]: New session 19 of user core.
Sep 13 00:12:11.275953 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:12:11.951539 sshd[6560]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:11.964986 systemd[1]: sshd@19-10.128.0.50:22-147.75.109.163:57496.service: Deactivated successfully.
Sep 13 00:12:11.972060 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:12:11.973860 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:12:11.976902 systemd-logind[1439]: Removed session 19.
Sep 13 00:12:12.024484 systemd[1]: Started sshd@20-10.128.0.50:22-147.75.109.163:57498.service - OpenSSH per-connection server daemon (147.75.109.163:57498).
Sep 13 00:12:12.432443 sshd[6577]: Accepted publickey for core from 147.75.109.163 port 57498 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:12:12.435786 sshd[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:12.446163 kubelet[2572]: I0913 00:12:12.445189 2572 scope.go:117] "RemoveContainer" containerID="91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb"
Sep 13 00:12:12.448885 systemd-logind[1439]: New session 20 of user core.
Sep 13 00:12:12.453450 containerd[1464]: time="2025-09-13T00:12:12.452942627Z" level=info msg="RemoveContainer for \"91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb\""
Sep 13 00:12:12.453579 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:12:12.467072 containerd[1464]: time="2025-09-13T00:12:12.466946113Z" level=info msg="RemoveContainer for \"91f5cd0fef3536bb38bb281fad7355848fe02c3f24a19cc23722f81b0612addb\" returns successfully"
Sep 13 00:12:12.470215 containerd[1464]: time="2025-09-13T00:12:12.468930619Z" level=info msg="StopPodSandbox for \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\""
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.534 [WARNING][6588] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0"
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.534 [INFO][6588] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735"
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.534 [INFO][6588] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" iface="eth0" netns=""
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.534 [INFO][6588] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735"
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.535 [INFO][6588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735"
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.583 [INFO][6597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0"
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.583 [INFO][6597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.584 [INFO][6597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.597 [WARNING][6597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0"
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.597 [INFO][6597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0"
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.598 [INFO][6597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:12.602895 containerd[1464]: 2025-09-13 00:12:12.600 [INFO][6588] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735"
Sep 13 00:12:12.603699 containerd[1464]: time="2025-09-13T00:12:12.603000124Z" level=info msg="TearDown network for sandbox \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\" successfully"
Sep 13 00:12:12.603699 containerd[1464]: time="2025-09-13T00:12:12.603062072Z" level=info msg="StopPodSandbox for \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\" returns successfully"
Sep 13 00:12:12.604993 containerd[1464]: time="2025-09-13T00:12:12.603859113Z" level=info msg="RemovePodSandbox for \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\""
Sep 13 00:12:12.604993 containerd[1464]: time="2025-09-13T00:12:12.603905787Z" level=info msg="Forcibly stopping sandbox \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\""
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.687 [WARNING][6611] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0"
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.688 [INFO][6611] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735"
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.688 [INFO][6611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" iface="eth0" netns=""
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.688 [INFO][6611] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735"
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.688 [INFO][6611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735"
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.775 [INFO][6626] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0"
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.776 [INFO][6626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.776 [INFO][6626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.795 [WARNING][6626] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0"
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.795 [INFO][6626] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" HandleID="k8s-pod-network.f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--lxwx2-eth0"
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.799 [INFO][6626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:12.804830 containerd[1464]: 2025-09-13 00:12:12.802 [INFO][6611] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735"
Sep 13 00:12:12.806880 containerd[1464]: time="2025-09-13T00:12:12.805020875Z" level=info msg="TearDown network for sandbox \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\" successfully"
Sep 13 00:12:12.814413 containerd[1464]: time="2025-09-13T00:12:12.814305048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:12:12.814699 containerd[1464]: time="2025-09-13T00:12:12.814669021Z" level=info msg="RemovePodSandbox \"f92e2cf6331703f697d2d1bf0b5be593368f47244d35bb58c17506cc63978735\" returns successfully"
Sep 13 00:12:12.815455 containerd[1464]: time="2025-09-13T00:12:12.815418880Z" level=info msg="StopPodSandbox for \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\""
Sep 13 00:12:12.860054 sshd[6577]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:12.873476 systemd[1]: sshd@20-10.128.0.50:22-147.75.109.163:57498.service: Deactivated successfully.
Sep 13 00:12:12.880975 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:12:12.882733 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:12:12.885846 systemd-logind[1439]: Removed session 20.
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.896 [WARNING][6640] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0"
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.897 [INFO][6640] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b"
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.897 [INFO][6640] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" iface="eth0" netns=""
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.898 [INFO][6640] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b"
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.898 [INFO][6640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b"
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.952 [INFO][6649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0"
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.953 [INFO][6649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.953 [INFO][6649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.962 [WARNING][6649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0"
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.962 [INFO][6649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0"
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.964 [INFO][6649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:12.969416 containerd[1464]: 2025-09-13 00:12:12.966 [INFO][6640] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b"
Sep 13 00:12:12.969416 containerd[1464]: time="2025-09-13T00:12:12.969284813Z" level=info msg="TearDown network for sandbox \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\" successfully"
Sep 13 00:12:12.969416 containerd[1464]: time="2025-09-13T00:12:12.969322176Z" level=info msg="StopPodSandbox for \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\" returns successfully"
Sep 13 00:12:12.972471 containerd[1464]: time="2025-09-13T00:12:12.971814067Z" level=info msg="RemovePodSandbox for \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\""
Sep 13 00:12:12.972471 containerd[1464]: time="2025-09-13T00:12:12.971868545Z" level=info msg="Forcibly stopping sandbox \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\""
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.047 [WARNING][6663] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0"
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.047 [INFO][6663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b"
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.047 [INFO][6663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" iface="eth0" netns=""
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.048 [INFO][6663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b"
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.048 [INFO][6663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b"
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.091 [INFO][6670] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0"
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.092 [INFO][6670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.092 [INFO][6670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.100 [WARNING][6670] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0"
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.100 [INFO][6670] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" HandleID="k8s-pod-network.68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b" Workload="ci--4081--3--5--nightly--20250912--2100--20bc6d60f49c8c59945c-k8s-calico--apiserver--858b45798c--wf79q-eth0"
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.102 [INFO][6670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:13.108735 containerd[1464]: 2025-09-13 00:12:13.105 [INFO][6663] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b"
Sep 13 00:12:13.108735 containerd[1464]: time="2025-09-13T00:12:13.108646388Z" level=info msg="TearDown network for sandbox \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\" successfully"
Sep 13 00:12:13.120306 containerd[1464]: time="2025-09-13T00:12:13.120214632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:12:13.120486 containerd[1464]: time="2025-09-13T00:12:13.120353734Z" level=info msg="RemovePodSandbox \"68fe07d7c171dcc49c5e85992e5505579acfcc5b2ec17261cde751db646c0b8b\" returns successfully"
Sep 13 00:12:17.936321 systemd[1]: Started sshd@21-10.128.0.50:22-147.75.109.163:57510.service - OpenSSH per-connection server daemon (147.75.109.163:57510).
Sep 13 00:12:18.328845 sshd[6679]: Accepted publickey for core from 147.75.109.163 port 57510 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:12:18.331265 sshd[6679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:18.349671 systemd-logind[1439]: New session 21 of user core.
Sep 13 00:12:18.353461 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:12:18.777577 sshd[6679]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:18.786758 systemd[1]: sshd@21-10.128.0.50:22-147.75.109.163:57510.service: Deactivated successfully.
Sep 13 00:12:18.795551 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:12:18.799648 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:12:18.803085 systemd-logind[1439]: Removed session 21.
Sep 13 00:12:23.853741 systemd[1]: Started sshd@22-10.128.0.50:22-147.75.109.163:59652.service - OpenSSH per-connection server daemon (147.75.109.163:59652).
Sep 13 00:12:24.246898 sshd[6715]: Accepted publickey for core from 147.75.109.163 port 59652 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:12:24.249577 sshd[6715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:24.262293 systemd-logind[1439]: New session 22 of user core.
Sep 13 00:12:24.266469 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:12:24.639355 sshd[6715]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:24.649997 systemd[1]: sshd@22-10.128.0.50:22-147.75.109.163:59652.service: Deactivated successfully.
Sep 13 00:12:24.652347 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:12:24.656403 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:12:24.663030 systemd-logind[1439]: Removed session 22.
Sep 13 00:12:29.718769 systemd[1]: Started sshd@23-10.128.0.50:22-147.75.109.163:59662.service - OpenSSH per-connection server daemon (147.75.109.163:59662).
Sep 13 00:12:30.128263 sshd[6749]: Accepted publickey for core from 147.75.109.163 port 59662 ssh2: RSA SHA256:pkRKmUiE0sy6yy9Fgooa/o0g/uQ3gDVemzZTk3xRzGU
Sep 13 00:12:30.129279 sshd[6749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:30.138777 systemd-logind[1439]: New session 23 of user core.
Sep 13 00:12:30.146516 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:12:30.520016 sshd[6749]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:30.529400 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:12:30.531409 systemd[1]: sshd@23-10.128.0.50:22-147.75.109.163:59662.service: Deactivated successfully.
Sep 13 00:12:30.536532 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:12:30.538527 systemd-logind[1439]: Removed session 23.