Sep 4 00:05:06.338023 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025
Sep 4 00:05:06.338082 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:05:06.338111 kernel: BIOS-provided physical RAM map:
Sep 4 00:05:06.338127 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 4 00:05:06.338176 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 4 00:05:06.338194 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 4 00:05:06.338225 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 4 00:05:06.338245 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 4 00:05:06.338263 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable
Sep 4 00:05:06.338282 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data
Sep 4 00:05:06.338300 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable
Sep 4 00:05:06.338320 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Sep 4 00:05:06.338339 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 4 00:05:06.338359 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 4 00:05:06.338387 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 4 00:05:06.338405 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 4 00:05:06.338427 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 4 00:05:06.338443 kernel: NX (Execute Disable) protection: active
Sep 4 00:05:06.338464 kernel: APIC: Static calls initialized
Sep 4 00:05:06.338481 kernel: efi: EFI v2.7 by EDK II
Sep 4 00:05:06.338500 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018
Sep 4 00:05:06.338525 kernel: random: crng init done
Sep 4 00:05:06.338546 kernel: secureboot: Secure boot disabled
Sep 4 00:05:06.338565 kernel: SMBIOS 2.4 present.
Sep 4 00:05:06.338582 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 4 00:05:06.338599 kernel: DMI: Memory slots populated: 1/1
Sep 4 00:05:06.338617 kernel: Hypervisor detected: KVM
Sep 4 00:05:06.338638 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 00:05:06.338661 kernel: kvm-clock: using sched offset of 16300822369 cycles
Sep 4 00:05:06.338683 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 00:05:06.338708 kernel: tsc: Detected 2299.998 MHz processor
Sep 4 00:05:06.338729 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 00:05:06.338755 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 00:05:06.338772 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 4 00:05:06.338792 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 4 00:05:06.338811 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 00:05:06.338828 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 4 00:05:06.338844 kernel: Using GB pages for direct mapping
Sep 4 00:05:06.338861 kernel: ACPI: Early table checksum verification disabled
Sep 4 00:05:06.338886 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 4 00:05:06.338915 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 4 00:05:06.338938 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 4 00:05:06.338962 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 4 00:05:06.338982 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 4 00:05:06.339010 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 4 00:05:06.339037 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 4 00:05:06.339071 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 4 00:05:06.339101 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 4 00:05:06.339149 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 4 00:05:06.339191 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 4 00:05:06.339209 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 4 00:05:06.339227 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 4 00:05:06.339244 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 4 00:05:06.339262 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 4 00:05:06.339292 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 4 00:05:06.339316 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 4 00:05:06.339334 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 4 00:05:06.339352 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 4 00:05:06.339370 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 4 00:05:06.339388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 4 00:05:06.339406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 4 00:05:06.339424 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 4 00:05:06.339442 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Sep 4 00:05:06.339460 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Sep 4 00:05:06.339482 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff]
Sep 4 00:05:06.339500 kernel: Zone ranges:
Sep 4 00:05:06.339518 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 00:05:06.339535 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 4 00:05:06.339553 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 4 00:05:06.339570 kernel: Device empty
Sep 4 00:05:06.339588 kernel: Movable zone start for each node
Sep 4 00:05:06.339606 kernel: Early memory node ranges
Sep 4 00:05:06.339623 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 4 00:05:06.339648 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 4 00:05:06.339666 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff]
Sep 4 00:05:06.339683 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff]
Sep 4 00:05:06.339701 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 4 00:05:06.339719 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 4 00:05:06.339737 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 4 00:05:06.339754 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 00:05:06.339772 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 4 00:05:06.339790 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 4 00:05:06.339811 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Sep 4 00:05:06.339829 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 4 00:05:06.339847 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 4 00:05:06.339872 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 4 00:05:06.339890 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 00:05:06.339908 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 00:05:06.339926 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 00:05:06.339944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 00:05:06.339962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 00:05:06.339984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 00:05:06.340002 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 00:05:06.340020 kernel: CPU topo: Max. logical packages: 1
Sep 4 00:05:06.340037 kernel: CPU topo: Max. logical dies: 1
Sep 4 00:05:06.340055 kernel: CPU topo: Max. dies per package: 1
Sep 4 00:05:06.340072 kernel: CPU topo: Max. threads per core: 2
Sep 4 00:05:06.340089 kernel: CPU topo: Num. cores per package: 1
Sep 4 00:05:06.340107 kernel: CPU topo: Num. threads per package: 2
Sep 4 00:05:06.340124 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 4 00:05:06.340609 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 4 00:05:06.340636 kernel: Booting paravirtualized kernel on KVM
Sep 4 00:05:06.340655 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 00:05:06.340674 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 00:05:06.340692 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 4 00:05:06.340710 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 4 00:05:06.340727 kernel: pcpu-alloc: [0] 0 1
Sep 4 00:05:06.340746 kernel: kvm-guest: PV spinlocks enabled
Sep 4 00:05:06.340764 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 00:05:06.340788 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:05:06.340807 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 00:05:06.340824 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 4 00:05:06.340844 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 00:05:06.340861 kernel: Fallback order for Node 0: 0
Sep 4 00:05:06.340891 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138
Sep 4 00:05:06.340909 kernel: Policy zone: Normal
Sep 4 00:05:06.340926 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 00:05:06.340944 kernel: software IO TLB: area num 2.
Sep 4 00:05:06.340979 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 00:05:06.340999 kernel: Kernel/User page tables isolation: enabled
Sep 4 00:05:06.341023 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 4 00:05:06.341041 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 00:05:06.341060 kernel: Dynamic Preempt: voluntary
Sep 4 00:05:06.341079 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 00:05:06.341100 kernel: rcu: RCU event tracing is enabled.
Sep 4 00:05:06.341119 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 00:05:06.341183 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 00:05:06.341207 kernel: Rude variant of Tasks RCU enabled.
Sep 4 00:05:06.341226 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 00:05:06.341244 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 00:05:06.341265 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 00:05:06.341284 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:05:06.341303 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:05:06.341322 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:05:06.341345 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 00:05:06.341364 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 00:05:06.341383 kernel: Console: colour dummy device 80x25
Sep 4 00:05:06.341402 kernel: printk: legacy console [ttyS0] enabled
Sep 4 00:05:06.341422 kernel: ACPI: Core revision 20240827
Sep 4 00:05:06.341441 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 00:05:06.341460 kernel: x2apic enabled
Sep 4 00:05:06.341478 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 00:05:06.341497 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 4 00:05:06.341520 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 4 00:05:06.341540 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 4 00:05:06.341559 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 4 00:05:06.341577 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 4 00:05:06.341596 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 00:05:06.341615 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Sep 4 00:05:06.341635 kernel: Spectre V2 : Mitigation: IBRS
Sep 4 00:05:06.341656 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 00:05:06.341674 kernel: RETBleed: Mitigation: IBRS
Sep 4 00:05:06.341697 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 00:05:06.341717 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 4 00:05:06.341736 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 00:05:06.341755 kernel: MDS: Mitigation: Clear CPU buffers
Sep 4 00:05:06.341773 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 00:05:06.341792 kernel: active return thunk: its_return_thunk
Sep 4 00:05:06.341811 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 4 00:05:06.341830 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 00:05:06.341849 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 00:05:06.341878 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 00:05:06.341898 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 00:05:06.341917 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 4 00:05:06.341937 kernel: Freeing SMP alternatives memory: 32K
Sep 4 00:05:06.341956 kernel: pid_max: default: 32768 minimum: 301
Sep 4 00:05:06.341975 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 00:05:06.341994 kernel: landlock: Up and running.
Sep 4 00:05:06.342013 kernel: SELinux: Initializing.
Sep 4 00:05:06.342033 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 00:05:06.342055 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 00:05:06.342074 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 4 00:05:06.342093 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 4 00:05:06.342112 kernel: signal: max sigframe size: 1776
Sep 4 00:05:06.342147 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 00:05:06.342217 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 00:05:06.342236 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 00:05:06.342254 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 00:05:06.342274 kernel: smp: Bringing up secondary CPUs ...
Sep 4 00:05:06.342297 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 00:05:06.342317 kernel: .... node #0, CPUs: #1
Sep 4 00:05:06.342337 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 4 00:05:06.342358 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 4 00:05:06.342376 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 00:05:06.342395 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 4 00:05:06.342415 kernel: Memory: 7566320K/7860552K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 288664K reserved, 0K cma-reserved)
Sep 4 00:05:06.342434 kernel: devtmpfs: initialized
Sep 4 00:05:06.342458 kernel: x86/mm: Memory block size: 128MB
Sep 4 00:05:06.342477 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 4 00:05:06.342496 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 00:05:06.342515 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 00:05:06.342533 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 00:05:06.342554 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 00:05:06.342572 kernel: audit: initializing netlink subsys (disabled)
Sep 4 00:05:06.342591 kernel: audit: type=2000 audit(1756944300.725:1): state=initialized audit_enabled=0 res=1
Sep 4 00:05:06.342611 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 00:05:06.342634 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 00:05:06.342652 kernel: cpuidle: using governor menu
Sep 4 00:05:06.342671 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 00:05:06.342690 kernel: dca service started, version 1.12.1
Sep 4 00:05:06.342708 kernel: PCI: Using configuration type 1 for base access
Sep 4 00:05:06.342728 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 00:05:06.342747 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 00:05:06.342765 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 00:05:06.342784 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 00:05:06.342807 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 00:05:06.342827 kernel: ACPI: Added _OSI(Module Device)
Sep 4 00:05:06.342846 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 00:05:06.342871 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 00:05:06.342890 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 4 00:05:06.342909 kernel: ACPI: Interpreter enabled
Sep 4 00:05:06.342928 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 00:05:06.342946 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 00:05:06.342966 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 00:05:06.342988 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 4 00:05:06.343007 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 4 00:05:06.343025 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 00:05:06.343395 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 00:05:06.343656 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 00:05:06.344006 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 00:05:06.344045 kernel: PCI host bridge to bus 0000:00
Sep 4 00:05:06.344321 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 00:05:06.344546 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 00:05:06.344761 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 00:05:06.345009 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 4 00:05:06.345242 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 00:05:06.345474 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 4 00:05:06.345696 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Sep 4 00:05:06.345933 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 4 00:05:06.346171 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 4 00:05:06.346394 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Sep 4 00:05:06.346625 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Sep 4 00:05:06.346835 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Sep 4 00:05:06.347100 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 4 00:05:06.347391 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Sep 4 00:05:06.347632 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Sep 4 00:05:06.347898 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 00:05:06.350232 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Sep 4 00:05:06.350469 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Sep 4 00:05:06.350494 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 00:05:06.350514 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 00:05:06.350541 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 00:05:06.350560 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 00:05:06.350580 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 00:05:06.350598 kernel: iommu: Default domain type: Translated
Sep 4 00:05:06.350619 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 00:05:06.350638 kernel: efivars: Registered efivars operations
Sep 4 00:05:06.350657 kernel: PCI: Using ACPI for IRQ routing
Sep 4 00:05:06.350676 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 00:05:06.350695 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 4 00:05:06.350718 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 4 00:05:06.350737 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff]
Sep 4 00:05:06.350756 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 4 00:05:06.350775 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 4 00:05:06.350794 kernel: vgaarb: loaded
Sep 4 00:05:06.350814 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 00:05:06.350833 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 00:05:06.350852 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 00:05:06.350879 kernel: pnp: PnP ACPI init
Sep 4 00:05:06.350902 kernel: pnp: PnP ACPI: found 7 devices
Sep 4 00:05:06.350922 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 00:05:06.350941 kernel: NET: Registered PF_INET protocol family
Sep 4 00:05:06.350960 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 00:05:06.350979 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 4 00:05:06.350999 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 00:05:06.351018 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 00:05:06.351037 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 4 00:05:06.351062 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 4 00:05:06.351082 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 00:05:06.351101 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 00:05:06.351120 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 00:05:06.351170 kernel: NET: Registered PF_XDP protocol family
Sep 4 00:05:06.351405 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 00:05:06.351630 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 00:05:06.351852 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 00:05:06.352084 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 4 00:05:06.353206 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 00:05:06.353241 kernel: PCI: CLS 0 bytes, default 64
Sep 4 00:05:06.353264 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 4 00:05:06.353283 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 4 00:05:06.353302 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 00:05:06.353321 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 4 00:05:06.353341 kernel: clocksource: Switched to clocksource tsc
Sep 4 00:05:06.353366 kernel: Initialise system trusted keyrings
Sep 4 00:05:06.353386 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 4 00:05:06.353406 kernel: Key type asymmetric registered
Sep 4 00:05:06.353425 kernel: Asymmetric key parser 'x509' registered
Sep 4 00:05:06.353444 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 00:05:06.353465 kernel: io scheduler mq-deadline registered
Sep 4 00:05:06.353484 kernel: io scheduler kyber registered
Sep 4 00:05:06.353503 kernel: io scheduler bfq registered
Sep 4 00:05:06.353522 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 00:05:06.353542 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 00:05:06.353768 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 4 00:05:06.353793 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 4 00:05:06.354013 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 4 00:05:06.354039 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 00:05:06.354272 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 4 00:05:06.354297 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 00:05:06.354318 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 00:05:06.354337 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 4 00:05:06.354362 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 4 00:05:06.354383 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 4 00:05:06.354606 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 4 00:05:06.354634 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 00:05:06.354654 kernel: i8042: Warning: Keylock active
Sep 4 00:05:06.354674 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 00:05:06.354693 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 00:05:06.354912 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 4 00:05:06.355113 kernel: rtc_cmos 00:00: registered as rtc0
Sep 4 00:05:06.357974 kernel: rtc_cmos 00:00: setting system clock to 2025-09-04T00:05:05 UTC (1756944305)
Sep 4 00:05:06.358233 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 4 00:05:06.358259 kernel: intel_pstate: CPU model not supported
Sep 4 00:05:06.358277 kernel: pstore: Using crash dump compression: deflate
Sep 4 00:05:06.358294 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 00:05:06.358312 kernel: NET: Registered PF_INET6 protocol family
Sep 4 00:05:06.358329 kernel: Segment Routing with IPv6
Sep 4 00:05:06.358352 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 00:05:06.358369 kernel: NET: Registered PF_PACKET protocol family
Sep 4 00:05:06.358387 kernel: Key type dns_resolver registered
Sep 4 00:05:06.358404 kernel: IPI shorthand broadcast: enabled
Sep 4 00:05:06.358422 kernel: sched_clock: Marking stable (4263009329, 995936414)->(5647820176, -388874433)
Sep 4 00:05:06.358440 kernel: registered taskstats version 1
Sep 4 00:05:06.358460 kernel: Loading compiled-in X.509 certificates
Sep 4 00:05:06.358477 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10'
Sep 4 00:05:06.358495 kernel: Demotion targets for Node 0: null
Sep 4 00:05:06.358516 kernel: Key type .fscrypt registered
Sep 4 00:05:06.358533 kernel: Key type fscrypt-provisioning registered
Sep 4 00:05:06.358566 kernel: ima: Allocated hash algorithm: sha1
Sep 4 00:05:06.358584 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 00:05:06.358602 kernel: ima: No architecture policies found
Sep 4 00:05:06.358619 kernel: clk: Disabling unused clocks
Sep 4 00:05:06.358638 kernel: Warning: unable to open an initial console.
Sep 4 00:05:06.358657 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 4 00:05:06.358675 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 00:05:06.358697 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 4 00:05:06.358715 kernel: Run /init as init process
Sep 4 00:05:06.358733 kernel: with arguments:
Sep 4 00:05:06.358751 kernel: /init
Sep 4 00:05:06.358768 kernel: with environment:
Sep 4 00:05:06.358786 kernel: HOME=/
Sep 4 00:05:06.358805 kernel: TERM=linux
Sep 4 00:05:06.358823 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 00:05:06.358843 systemd[1]: Successfully made /usr/ read-only.
Sep 4 00:05:06.358883 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:05:06.358903 systemd[1]: Detected virtualization google.
Sep 4 00:05:06.358921 systemd[1]: Detected architecture x86-64.
Sep 4 00:05:06.358940 systemd[1]: Running in initrd.
Sep 4 00:05:06.358958 systemd[1]: No hostname configured, using default hostname.
Sep 4 00:05:06.358978 systemd[1]: Hostname set to .
Sep 4 00:05:06.359001 systemd[1]: Initializing machine ID from random generator.
Sep 4 00:05:06.359022 systemd[1]: Queued start job for default target initrd.target.
Sep 4 00:05:06.359065 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:05:06.359090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:05:06.359115 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 00:05:06.359345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:05:06.359373 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 00:05:06.359402 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 00:05:06.359426 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 00:05:06.359449 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 00:05:06.359470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:05:06.359496 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:05:06.359521 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:05:06.359552 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:05:06.359590 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:05:06.359615 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:05:06.359638 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:05:06.359662 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:05:06.359688 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 00:05:06.359710 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 00:05:06.359734 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:05:06.359760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:05:06.359791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:05:06.359816 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:05:06.359853 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 00:05:06.359890 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:05:06.359914 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 00:05:06.359940 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 00:05:06.359965 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 00:05:06.359989 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:05:06.360021 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:05:06.360045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:05:06.360071 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 00:05:06.360097 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:05:06.360123 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 00:05:06.360184 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 00:05:06.360265 systemd-journald[207]: Collecting audit messages is disabled.
Sep 4 00:05:06.360328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:05:06.360359 systemd-journald[207]: Journal started
Sep 4 00:05:06.360407 systemd-journald[207]: Runtime Journal (/run/log/journal/2db640b030454387ab43e93766deb74c) is 8M, max 148.9M, 140.9M free.
Sep 4 00:05:06.331651 systemd-modules-load[208]: Inserted module 'overlay'
Sep 4 00:05:06.368869 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:05:06.374447 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 00:05:06.388508 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 00:05:06.392838 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 00:05:06.395767 systemd-modules-load[208]: Inserted module 'br_netfilter'
Sep 4 00:05:06.397598 kernel: Bridge firewalling registered
Sep 4 00:05:06.397319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:05:06.403414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 00:05:06.410747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:05:06.421386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:05:06.438862 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 00:05:06.448020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:05:06.457289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:05:06.465311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:05:06.474696 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:05:06.478259 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 00:05:06.493461 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 00:05:06.518004 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:05:06.580345 systemd-resolved[245]: Positive Trust Anchors:
Sep 4 00:05:06.580811 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:05:06.580892 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 00:05:06.588263 systemd-resolved[245]: Defaulting to hostname 'linux'.
Sep 4 00:05:06.590461 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 00:05:06.603441 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:05:06.677187 kernel: SCSI subsystem initialized
Sep 4 00:05:06.691174 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 00:05:06.705177 kernel: iscsi: registered transport (tcp)
Sep 4 00:05:06.735572 kernel: iscsi: registered transport (qla4xxx)
Sep 4 00:05:06.735667 kernel: QLogic iSCSI HBA Driver
Sep 4 00:05:06.762285 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 00:05:06.779611 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:05:06.788330 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 00:05:06.859743 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 00:05:06.868861 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 00:05:06.934189 kernel: raid6: avx2x4 gen() 17333 MB/s
Sep 4 00:05:06.951185 kernel: raid6: avx2x2 gen() 17056 MB/s
Sep 4 00:05:06.969099 kernel: raid6: avx2x1 gen() 12975 MB/s
Sep 4 00:05:06.969227 kernel: raid6: using algorithm avx2x4 gen() 17333 MB/s
Sep 4 00:05:06.986948 kernel: raid6: .... xor() 7647 MB/s, rmw enabled
Sep 4 00:05:06.987033 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 00:05:07.013178 kernel: xor: automatically using best checksumming function avx
Sep 4 00:05:07.219178 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 00:05:07.229012 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 00:05:07.234088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:05:07.270544 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Sep 4 00:05:07.280710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:05:07.286453 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 00:05:07.323436 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Sep 4 00:05:07.361403 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 00:05:07.369639 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 00:05:07.478584 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:05:07.486233 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 00:05:07.614156 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 00:05:07.626172 kernel: AES CTR mode by8 optimization enabled
Sep 4 00:05:07.630159 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Sep 4 00:05:07.761495 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 4 00:05:07.815577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:05:07.819608 kernel: scsi host0: Virtio SCSI HBA
Sep 4 00:05:07.816052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:05:07.839711 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Sep 4 00:05:07.839581 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:05:07.847033 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:05:07.849459 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:05:07.885099 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Sep 4 00:05:07.885555 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Sep 4 00:05:07.885879 kernel: sd 0:0:1:0: [sda] Write Protect is off
Sep 4 00:05:07.889164 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Sep 4 00:05:07.889553 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 4 00:05:07.894820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:05:07.906034 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 00:05:07.906161 kernel: GPT:17805311 != 25165823
Sep 4 00:05:07.906195 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 00:05:07.906226 kernel: GPT:17805311 != 25165823
Sep 4 00:05:07.907006 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 00:05:07.908710 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 00:05:07.911173 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Sep 4 00:05:08.021285 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Sep 4 00:05:08.022183 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 00:05:08.041891 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Sep 4 00:05:08.065919 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Sep 4 00:05:08.068168 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Sep 4 00:05:08.085295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 4 00:05:08.085845 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 00:05:08.093277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:05:08.098334 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 00:05:08.105357 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 00:05:08.113253 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 00:05:08.131193 disk-uuid[607]: Primary Header is updated.
Sep 4 00:05:08.131193 disk-uuid[607]: Secondary Entries is updated.
Sep 4 00:05:08.131193 disk-uuid[607]: Secondary Header is updated.
Sep 4 00:05:08.145606 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 00:05:08.152326 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 00:05:09.187164 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 00:05:09.187269 disk-uuid[608]: The operation has completed successfully.
Sep 4 00:05:09.290639 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 00:05:09.290830 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 00:05:09.354668 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 00:05:09.391764 sh[629]: Success
Sep 4 00:05:09.419217 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 00:05:09.419351 kernel: device-mapper: uevent: version 1.0.3
Sep 4 00:05:09.419727 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 00:05:09.436172 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 4 00:05:09.548653 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 00:05:09.554572 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 00:05:09.576059 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 00:05:09.605195 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (641)
Sep 4 00:05:09.609312 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071
Sep 4 00:05:09.609421 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:05:09.634411 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 00:05:09.634580 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 00:05:09.634624 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 00:05:09.640681 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 00:05:09.646236 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 00:05:09.646648 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 00:05:09.648928 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 00:05:09.661456 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 00:05:09.717261 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (676)
Sep 4 00:05:09.720464 kernel: BTRFS info (device sda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:05:09.720570 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:05:09.730330 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 00:05:09.730473 kernel: BTRFS info (device sda6): turning on async discard
Sep 4 00:05:09.730522 kernel: BTRFS info (device sda6): enabling free space tree
Sep 4 00:05:09.740202 kernel: BTRFS info (device sda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:05:09.742602 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 00:05:09.752409 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 00:05:09.903796 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 00:05:09.925401 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 00:05:10.044754 systemd-networkd[810]: lo: Link UP
Sep 4 00:05:10.044779 systemd-networkd[810]: lo: Gained carrier
Sep 4 00:05:10.049237 systemd-networkd[810]: Enumeration completed
Sep 4 00:05:10.051032 ignition[735]: Ignition 2.21.0
Sep 4 00:05:10.050311 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 00:05:10.051045 ignition[735]: Stage: fetch-offline
Sep 4 00:05:10.050509 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:05:10.051099 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:05:10.050517 systemd-networkd[810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 00:05:10.051126 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 00:05:10.053321 systemd-networkd[810]: eth0: Link UP
Sep 4 00:05:10.052235 ignition[735]: parsed url from cmdline: ""
Sep 4 00:05:10.054257 systemd-networkd[810]: eth0: Gained carrier
Sep 4 00:05:10.054204 ignition[735]: no config URL provided
Sep 4 00:05:10.054283 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:05:10.054223 ignition[735]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 00:05:10.055814 systemd[1]: Reached target network.target - Network.
Sep 4 00:05:10.054242 ignition[735]: no config at "/usr/lib/ignition/user.ign"
Sep 4 00:05:10.061934 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 00:05:10.054258 ignition[735]: failed to fetch config: resource requires networking
Sep 4 00:05:10.069837 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 00:05:10.054569 ignition[735]: Ignition finished successfully
Sep 4 00:05:10.071334 systemd-networkd[810]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc.c.flatcar-212911.internal' to 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc'
Sep 4 00:05:10.071363 systemd-networkd[810]: eth0: DHCPv4 address 10.128.0.26/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 4 00:05:10.130852 ignition[819]: Ignition 2.21.0
Sep 4 00:05:10.130870 ignition[819]: Stage: fetch
Sep 4 00:05:10.131235 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:05:10.131254 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 00:05:10.131494 ignition[819]: parsed url from cmdline: ""
Sep 4 00:05:10.131503 ignition[819]: no config URL provided
Sep 4 00:05:10.131511 ignition[819]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 00:05:10.131523 ignition[819]: no config at "/usr/lib/ignition/user.ign"
Sep 4 00:05:10.131567 ignition[819]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Sep 4 00:05:10.152993 unknown[819]: fetched base config from "system"
Sep 4 00:05:10.136250 ignition[819]: GET result: OK
Sep 4 00:05:10.153003 unknown[819]: fetched base config from "system"
Sep 4 00:05:10.136391 ignition[819]: parsing config with SHA512: d27110925a7f83609580df5db88f7f0af4b06be3b85c8e39c202bc7ada08113bbf3d8571c8ff330f685ecb8f1f872a4500769977d65e307e542e1c038c79a4d9
Sep 4 00:05:10.153010 unknown[819]: fetched user config from "gcp"
Sep 4 00:05:10.154480 ignition[819]: fetch: fetch complete
Sep 4 00:05:10.158852 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 00:05:10.154493 ignition[819]: fetch: fetch passed
Sep 4 00:05:10.163873 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 00:05:10.154619 ignition[819]: Ignition finished successfully
Sep 4 00:05:10.208747 ignition[827]: Ignition 2.21.0
Sep 4 00:05:10.208771 ignition[827]: Stage: kargs
Sep 4 00:05:10.209101 ignition[827]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:05:10.217066 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 00:05:10.209121 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 00:05:10.212466 ignition[827]: kargs: kargs passed
Sep 4 00:05:10.212600 ignition[827]: Ignition finished successfully
Sep 4 00:05:10.225560 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 00:05:10.269885 ignition[834]: Ignition 2.21.0
Sep 4 00:05:10.269907 ignition[834]: Stage: disks
Sep 4 00:05:10.270262 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Sep 4 00:05:10.274808 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 00:05:10.270296 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 00:05:10.275976 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 00:05:10.271692 ignition[834]: disks: disks passed
Sep 4 00:05:10.280742 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 00:05:10.271761 ignition[834]: Ignition finished successfully
Sep 4 00:05:10.284284 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 00:05:10.292650 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 00:05:10.295095 systemd[1]: Reached target basic.target - Basic System.
Sep 4 00:05:10.303557 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 00:05:10.358805 systemd-fsck[843]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 4 00:05:10.370101 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 00:05:10.376898 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 00:05:10.573163 kernel: EXT4-fs (sda9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none.
Sep 4 00:05:10.574863 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 00:05:10.576151 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 00:05:10.581652 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 00:05:10.597612 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 00:05:10.602702 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 00:05:10.604020 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 00:05:10.606383 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 00:05:10.620178 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (851)
Sep 4 00:05:10.624761 kernel: BTRFS info (device sda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:05:10.624845 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:05:10.626628 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 00:05:10.635560 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 00:05:10.635673 kernel: BTRFS info (device sda6): turning on async discard
Sep 4 00:05:10.635725 kernel: BTRFS info (device sda6): enabling free space tree
Sep 4 00:05:10.633171 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 00:05:10.643767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 00:05:10.753021 initrd-setup-root[875]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 00:05:10.762859 initrd-setup-root[882]: cut: /sysroot/etc/group: No such file or directory
Sep 4 00:05:10.773970 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 00:05:10.784195 initrd-setup-root[896]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 00:05:10.951765 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 00:05:10.959205 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 00:05:10.963712 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 00:05:10.994914 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 00:05:10.996904 kernel: BTRFS info (device sda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:05:11.033354 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 00:05:11.040933 ignition[963]: INFO : Ignition 2.21.0
Sep 4 00:05:11.040933 ignition[963]: INFO : Stage: mount
Sep 4 00:05:11.047295 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:05:11.047295 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 00:05:11.047295 ignition[963]: INFO : mount: mount passed
Sep 4 00:05:11.047295 ignition[963]: INFO : Ignition finished successfully
Sep 4 00:05:11.048745 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 00:05:11.057323 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 00:05:11.096392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 00:05:11.141187 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (977)
Sep 4 00:05:11.144641 kernel: BTRFS info (device sda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:05:11.144738 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:05:11.152181 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 00:05:11.152301 kernel: BTRFS info (device sda6): turning on async discard
Sep 4 00:05:11.152334 kernel: BTRFS info (device sda6): enabling free space tree
Sep 4 00:05:11.156373 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 00:05:11.194963 ignition[994]: INFO : Ignition 2.21.0
Sep 4 00:05:11.194963 ignition[994]: INFO : Stage: files
Sep 4 00:05:11.200634 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:05:11.200634 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 00:05:11.200634 ignition[994]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 00:05:11.212396 ignition[994]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 00:05:11.212396 ignition[994]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 00:05:11.212396 ignition[994]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 00:05:11.212396 ignition[994]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 00:05:11.212396 ignition[994]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 00:05:11.208267 unknown[994]: wrote ssh authorized keys file for user: core
Sep 4 00:05:11.237320 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 00:05:11.237320 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 4 00:05:11.342092 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 00:05:11.650315 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 00:05:11.654551 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 00:05:11.654551 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 00:05:11.654551 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 00:05:11.654551 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 00:05:11.654551 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 00:05:11.654551 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 00:05:11.654551 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 00:05:11.654551 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 00:05:11.691414 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 00:05:11.691414 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 00:05:11.691414 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:05:11.691414 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:05:11.691414 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:05:11.691414 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 4 00:05:11.752510 systemd-networkd[810]: eth0: Gained IPv6LL
Sep 4 00:05:12.149550 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 00:05:12.907801 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:05:12.907801 ignition[994]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 00:05:12.918329 ignition[994]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 00:05:12.918329 ignition[994]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 00:05:12.918329 ignition[994]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 00:05:12.918329 ignition[994]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 00:05:12.918329 ignition[994]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 00:05:12.918329 ignition[994]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 00:05:12.918329 ignition[994]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 00:05:12.918329 ignition[994]: INFO : files: files passed
Sep 4 00:05:12.918329 ignition[994]: INFO : Ignition finished successfully
Sep 4 00:05:12.921664 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 00:05:12.933482 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 00:05:12.943377 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 00:05:12.971023 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 00:05:12.972526 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 00:05:12.985485 initrd-setup-root-after-ignition[1024]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:05:12.985485 initrd-setup-root-after-ignition[1024]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:05:12.994416 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:05:12.992079 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:05:12.996430 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 00:05:13.003976 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 00:05:13.084422 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 00:05:13.084626 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 00:05:13.091369 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 00:05:13.097419 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 00:05:13.101505 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 00:05:13.103507 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 00:05:13.142284 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:05:13.146461 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 00:05:13.179339 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:05:13.183589 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:05:13.192786 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 00:05:13.197056 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 00:05:13.197936 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:05:13.205026 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 00:05:13.208218 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 00:05:13.213095 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 00:05:13.218167 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 00:05:13.224594 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 00:05:13.230066 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 00:05:13.237362 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 00:05:13.243050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 00:05:13.251764 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 00:05:13.256028 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 00:05:13.261234 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 00:05:13.266221 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 00:05:13.266822 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 00:05:13.275894 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:05:13.279377 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:05:13.283738 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 00:05:13.284450 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:05:13.288775 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 00:05:13.289401 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 00:05:13.300077 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 00:05:13.300684 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:05:13.303480 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 00:05:13.303905 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 00:05:13.311787 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 00:05:13.322374 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 00:05:13.322750 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:05:13.341837 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 00:05:13.346397 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 00:05:13.346750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:05:13.352360 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 00:05:13.352634 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 00:05:13.370764 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 00:05:13.372413 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 00:05:13.377288 ignition[1048]: INFO : Ignition 2.21.0
Sep 4 00:05:13.377288 ignition[1048]: INFO : Stage: umount
Sep 4 00:05:13.377288 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:05:13.377288 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 00:05:13.390337 ignition[1048]: INFO : umount: umount passed
Sep 4 00:05:13.390337 ignition[1048]: INFO : Ignition finished successfully
Sep 4 00:05:13.387092 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 00:05:13.387292 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 00:05:13.400464 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 00:05:13.400607 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 00:05:13.407518 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 00:05:13.407658 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 00:05:13.412928 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 00:05:13.413548 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 00:05:13.420770 systemd[1]: Stopped target network.target - Network.
Sep 4 00:05:13.428409 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 00:05:13.428600 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 00:05:13.436523 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 00:05:13.441371 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 00:05:13.445486 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:05:13.447734 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 00:05:13.452675 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 00:05:13.455888 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 00:05:13.456337 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:05:13.461568 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 00:05:13.461657 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:05:13.466797 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 00:05:13.466922 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 00:05:13.472883 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 00:05:13.473028 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 00:05:13.478289 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 00:05:13.486842 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 00:05:13.496249 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 00:05:13.497662 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 00:05:13.497894 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 00:05:13.504263 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 00:05:13.504631 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 00:05:13.504829 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 00:05:13.516102 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 00:05:13.516888 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 00:05:13.517123 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 00:05:13.528187 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 00:05:13.533007 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 00:05:13.533085 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:05:13.538849 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 00:05:13.539324 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 00:05:13.548712 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 00:05:13.559395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 00:05:13.559885 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 00:05:13.569472 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 00:05:13.569607 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:05:13.574982 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 00:05:13.575128 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:05:13.579950 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 00:05:13.580089 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:05:13.590853 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:05:13.601330 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 00:05:13.601459 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:05:13.613615 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 00:05:13.613988 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:05:13.626589 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 00:05:13.626778 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:05:13.631901 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 00:05:13.631985 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:05:13.638513 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 00:05:13.638702 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 00:05:13.646841 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 00:05:13.647008 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 00:05:13.659327 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 00:05:13.659563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:05:13.672314 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 00:05:13.686379 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 00:05:13.686559 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:05:13.690816 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 00:05:13.690955 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:05:13.698603 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:05:13.698804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:05:13.707540 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 4 00:05:13.708086 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 00:05:13.708227 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:05:13.709298 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 00:05:13.709482 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 00:05:13.717498 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 00:05:13.717719 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 00:05:13.723323 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 00:05:13.821797 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Sep 4 00:05:13.733579 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 00:05:13.774960 systemd[1]: Switching root.
Sep 4 00:05:13.828376 systemd-journald[207]: Journal stopped
Sep 4 00:05:16.402022 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 00:05:16.402111 kernel: SELinux: policy capability open_perms=1
Sep 4 00:05:16.404196 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 00:05:16.404235 kernel: SELinux: policy capability always_check_network=0
Sep 4 00:05:16.404265 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 00:05:16.404295 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 00:05:16.404340 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 00:05:16.404371 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 00:05:16.404407 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 00:05:16.404434 kernel: audit: type=1403 audit(1756944314.428:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 00:05:16.404472 systemd[1]: Successfully loaded SELinux policy in 60.780ms.
Sep 4 00:05:16.404509 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.295ms.
Sep 4 00:05:16.404546 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:05:16.404585 systemd[1]: Detected virtualization google.
Sep 4 00:05:16.404622 systemd[1]: Detected architecture x86-64.
Sep 4 00:05:16.404655 systemd[1]: Detected first boot.
Sep 4 00:05:16.404681 systemd[1]: Initializing machine ID from random generator.
Sep 4 00:05:16.404716 zram_generator::config[1092]: No configuration found.
Sep 4 00:05:16.404758 kernel: Guest personality initialized and is inactive
Sep 4 00:05:16.404790 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 00:05:16.404822 kernel: Initialized host personality
Sep 4 00:05:16.404854 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 00:05:16.404889 systemd[1]: Populated /etc with preset unit settings.
Sep 4 00:05:16.404920 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 00:05:16.404956 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 00:05:16.404995 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 00:05:16.405031 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:05:16.405067 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 00:05:16.405102 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 00:05:16.405156 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 00:05:16.405193 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 00:05:16.405229 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 00:05:16.405270 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 00:05:16.405306 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 00:05:16.405363 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 00:05:16.405407 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:05:16.405444 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:05:16.405480 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 00:05:16.405519 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 00:05:16.405556 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 00:05:16.405603 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:05:16.405644 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 00:05:16.405673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:05:16.405710 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:05:16.405747 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 00:05:16.405785 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 00:05:16.405822 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 00:05:16.405860 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 00:05:16.405902 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:05:16.405940 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 00:05:16.405978 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:05:16.406015 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:05:16.406052 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 00:05:16.406090 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 00:05:16.406123 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 00:05:16.408242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:05:16.408290 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:05:16.408328 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:05:16.408368 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 00:05:16.408414 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 00:05:16.408452 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 00:05:16.408495 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 00:05:16.408533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:16.408571 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 00:05:16.408609 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 00:05:16.408647 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 00:05:16.408685 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 00:05:16.408724 systemd[1]: Reached target machines.target - Containers.
Sep 4 00:05:16.408755 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 00:05:16.408798 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:05:16.408837 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:05:16.408875 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 00:05:16.408909 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:05:16.408947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:05:16.408988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:05:16.409026 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 00:05:16.409063 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:05:16.409107 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 00:05:16.409160 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 00:05:16.409199 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 00:05:16.409238 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 00:05:16.409276 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 00:05:16.409316 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:05:16.409351 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:05:16.409389 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:05:16.409439 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 00:05:16.409475 kernel: fuse: init (API version 7.41)
Sep 4 00:05:16.409511 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 00:05:16.409550 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 00:05:16.409588 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 00:05:16.409626 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 00:05:16.409661 systemd[1]: Stopped verity-setup.service.
Sep 4 00:05:16.409700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:16.409736 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 00:05:16.409780 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 00:05:16.409820 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 00:05:16.409858 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 00:05:16.409901 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 00:05:16.409939 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 00:05:16.409977 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:05:16.410015 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 00:05:16.410054 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 00:05:16.410096 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:05:16.412175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:05:16.412234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:05:16.412273 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:05:16.412313 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 00:05:16.412351 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 00:05:16.412390 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 00:05:16.412442 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 00:05:16.412481 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 00:05:16.412528 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 00:05:16.412567 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 00:05:16.412609 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 00:05:16.412646 kernel: loop: module loaded
Sep 4 00:05:16.412680 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 00:05:16.412719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:05:16.412758 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 00:05:16.412817 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:05:16.412857 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 00:05:16.412897 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 00:05:16.412942 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:05:16.412982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:05:16.413021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:05:16.413062 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:05:16.413106 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 00:05:16.413162 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 00:05:16.413202 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 00:05:16.413309 systemd-journald[1163]: Collecting audit messages is disabled.
Sep 4 00:05:16.413398 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 00:05:16.413451 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 00:05:16.413492 systemd-journald[1163]: Journal started
Sep 4 00:05:16.413567 systemd-journald[1163]: Runtime Journal (/run/log/journal/e319ae45957b46c5b3ae3b485436cf5f) is 8M, max 148.9M, 140.9M free.
Sep 4 00:05:16.424911 kernel: ACPI: bus type drm_connector registered
Sep 4 00:05:15.516904 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 00:05:16.435585 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 00:05:16.435647 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:05:15.532099 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 4 00:05:15.532871 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 00:05:16.450378 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:05:16.450504 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:05:16.453970 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 00:05:16.459237 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:05:16.461231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:05:16.469272 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 00:05:16.512859 kernel: loop0: detected capacity change from 0 to 224512
Sep 4 00:05:16.518948 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 00:05:16.542215 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 00:05:16.566027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:05:16.599421 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 00:05:16.606322 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 00:05:16.610333 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:05:16.657687 systemd-journald[1163]: Time spent on flushing to /var/log/journal/e319ae45957b46c5b3ae3b485436cf5f is 90.367ms for 968 entries.
Sep 4 00:05:16.657687 systemd-journald[1163]: System Journal (/var/log/journal/e319ae45957b46c5b3ae3b485436cf5f) is 8M, max 584.8M, 576.8M free.
Sep 4 00:05:16.778045 systemd-journald[1163]: Received client request to flush runtime journal.
Sep 4 00:05:16.778126 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 00:05:16.778239 kernel: loop1: detected capacity change from 0 to 113872
Sep 4 00:05:16.784719 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 00:05:16.801222 kernel: loop2: detected capacity change from 0 to 146240
Sep 4 00:05:16.826676 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 00:05:16.836050 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:05:16.903208 kernel: loop3: detected capacity change from 0 to 52072
Sep 4 00:05:16.963960 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Sep 4 00:05:16.964003 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Sep 4 00:05:17.004177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:05:17.019185 kernel: loop4: detected capacity change from 0 to 224512
Sep 4 00:05:17.072207 kernel: loop5: detected capacity change from 0 to 113872
Sep 4 00:05:17.135424 kernel: loop6: detected capacity change from 0 to 146240
Sep 4 00:05:17.220190 kernel: loop7: detected capacity change from 0 to 52072
Sep 4 00:05:17.266312 (sd-merge)[1239]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Sep 4 00:05:17.268053 (sd-merge)[1239]: Merged extensions into '/usr'.
Sep 4 00:05:17.292739 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 00:05:17.292775 systemd[1]: Reloading...
Sep 4 00:05:17.429170 zram_generator::config[1261]: No configuration found.
Sep 4 00:05:17.870612 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:05:17.894177 ldconfig[1188]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 00:05:18.118202 systemd[1]: Reloading finished in 824 ms.
Sep 4 00:05:18.140827 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 00:05:18.151230 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 00:05:18.185453 systemd[1]: Starting ensure-sysext.service...
Sep 4 00:05:18.195508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 00:05:18.252315 systemd[1]: Reload requested from client PID 1305 ('systemctl') (unit ensure-sysext.service)...
Sep 4 00:05:18.252353 systemd[1]: Reloading...
Sep 4 00:05:18.280180 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 00:05:18.280265 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 00:05:18.280869 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 00:05:18.281472 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 00:05:18.283355 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 00:05:18.283981 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Sep 4 00:05:18.284092 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Sep 4 00:05:18.303283 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:05:18.303602 systemd-tmpfiles[1306]: Skipping /boot
Sep 4 00:05:18.364296 zram_generator::config[1332]: No configuration found.
Sep 4 00:05:18.433300 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:05:18.434217 systemd-tmpfiles[1306]: Skipping /boot
Sep 4 00:05:18.626040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:05:18.819125 systemd[1]: Reloading finished in 565 ms.
Sep 4 00:05:18.836580 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 00:05:18.865249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:05:18.888622 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:05:18.905647 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 00:05:18.922875 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 00:05:18.939310 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 00:05:18.953819 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:05:18.969044 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 00:05:18.990182 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:18.991035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:05:18.997785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:05:19.012592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:05:19.031713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:05:19.040605 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:05:19.041351 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:05:19.041595 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:19.065027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:05:19.066117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:05:19.078057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:05:19.078471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:05:19.097154 systemd-udevd[1385]: Using default interface naming scheme 'v255'.
Sep 4 00:05:19.101234 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 00:05:19.127075 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:19.127927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:05:19.135288 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:05:19.149800 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:05:19.156069 augenrules[1408]: No rules
Sep 4 00:05:19.163100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:05:19.178909 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 4 00:05:19.187479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:05:19.187771 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:05:19.188168 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 00:05:19.208655 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 00:05:19.218357 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:19.225577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:05:19.237827 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:05:19.239259 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:05:19.249625 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 00:05:19.262586 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 00:05:19.274433 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:05:19.274842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:05:19.285409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:05:19.285814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:05:19.297393 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:05:19.297775 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:05:19.307389 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:05:19.307767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:05:19.343091 systemd[1]: Finished ensure-sysext.service.
Sep 4 00:05:19.374202 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 4 00:05:19.393715 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Sep 4 00:05:19.411847 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 00:05:19.421354 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:05:19.421533 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:05:19.426558 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 00:05:19.435369 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 00:05:19.473543 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 00:05:19.500221 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 00:05:19.568493 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Sep 4 00:05:19.630506 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Sep 4 00:05:19.630601 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Sep 4 00:05:19.708267 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 00:05:19.809632 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 00:05:19.994736 systemd-networkd[1457]: lo: Link UP
Sep 4 00:05:19.994756 systemd-networkd[1457]: lo: Gained carrier
Sep 4 00:05:20.004230 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 4 00:05:20.021498 systemd-networkd[1457]: Enumeration completed
Sep 4 00:05:20.021768 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 00:05:20.025452 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:05:20.026195 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 00:05:20.028370 systemd-networkd[1457]: eth0: Link UP
Sep 4 00:05:20.028875 systemd-networkd[1457]: eth0: Gained carrier
Sep 4 00:05:20.029283 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:05:20.031309 systemd-resolved[1384]: Positive Trust Anchors:
Sep 4 00:05:20.031337 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:05:20.031409 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 00:05:20.037095 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 00:05:20.041995 systemd-resolved[1384]: Defaulting to hostname 'linux'.
Sep 4 00:05:20.044525 systemd-networkd[1457]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc.c.flatcar-212911.internal' to 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc'
Sep 4 00:05:20.044561 systemd-networkd[1457]: eth0: DHCPv4 address 10.128.0.26/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 4 00:05:20.061570 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 00:05:20.072590 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 00:05:20.098998 systemd[1]: Reached target network.target - Network.
Sep 4 00:05:20.106344 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:05:20.116406 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 00:05:20.127595 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 00:05:20.136412 kernel: ACPI: button: Power Button [PWRF]
Sep 4 00:05:20.143562 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 00:05:20.154446 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 4 00:05:20.164255 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Sep 4 00:05:20.164381 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 4 00:05:20.184701 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 00:05:20.193238 kernel: ACPI: button: Sleep Button [SLPF]
Sep 4 00:05:20.198627 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 00:05:20.209430 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 00:05:20.220414 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 00:05:20.220512 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:05:20.228388 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:05:20.238677 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 00:05:20.253548 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 00:05:20.266315 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 00:05:20.277010 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 00:05:20.287413 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 00:05:20.296223 kernel: EDAC MC: Ver: 3.0.0
Sep 4 00:05:20.310653 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 00:05:20.320245 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 00:05:20.334279 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 00:05:20.345776 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 00:05:20.445636 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 00:05:20.454398 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:05:20.463391 systemd[1]: Reached target basic.target - Basic System.
Sep 4 00:05:20.472481 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:05:20.472540 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:05:20.475860 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 00:05:20.491603 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 00:05:20.505914 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 00:05:20.519511 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 00:05:20.530950 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 00:05:20.548411 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 00:05:20.557301 jq[1512]: false
Sep 4 00:05:20.559381 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 00:05:20.571428 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 4 00:05:20.600100 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 00:05:20.620691 oslogin_cache_refresh[1514]: Refreshing passwd entry cache
Sep 4 00:05:20.621562 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing passwd entry cache
Sep 4 00:05:20.617546 systemd[1]: Started ntpd.service - Network Time Service.
Sep 4 00:05:20.625941 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 00:05:20.631000 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 00:05:20.637582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 00:05:20.649180 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting users, quitting
Sep 4 00:05:20.649180 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 00:05:20.649180 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing group entry cache
Sep 4 00:05:20.646599 oslogin_cache_refresh[1514]: Failure getting users, quitting
Sep 4 00:05:20.646640 oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 00:05:20.646745 oslogin_cache_refresh[1514]: Refreshing group entry cache
Sep 4 00:05:20.663754 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting groups, quitting
Sep 4 00:05:20.663754 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 00:05:20.653908 oslogin_cache_refresh[1514]: Failure getting groups, quitting
Sep 4 00:05:20.653960 oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 00:05:20.665619 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 00:05:20.678254 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Sep 4 00:05:20.680573 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 00:05:20.688491 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 00:05:20.701450 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 00:05:20.732274 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 00:05:20.744323 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 00:05:20.744815 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 00:05:20.746690 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 4 00:05:20.747232 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 4 00:05:20.777889 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 00:05:20.778563 extend-filesystems[1513]: Found /dev/sda6
Sep 4 00:05:20.809905 jq[1527]: true
Sep 4 00:05:20.810385 extend-filesystems[1513]: Found /dev/sda9
Sep 4 00:05:20.790478 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 00:05:20.835610 update_engine[1523]: I20250904 00:05:20.819497 1523 main.cc:92] Flatcar Update Engine starting
Sep 4 00:05:20.836012 extend-filesystems[1513]: Checking size of /dev/sda9
Sep 4 00:05:20.802391 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 00:05:20.803663 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 00:05:20.850780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 4 00:05:20.893823 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 00:05:20.920981 coreos-metadata[1509]: Sep 04 00:05:20.912 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Sep 4 00:05:20.924095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:05:20.928889 coreos-metadata[1509]: Sep 04 00:05:20.927 INFO Fetch successful
Sep 4 00:05:20.928889 coreos-metadata[1509]: Sep 04 00:05:20.927 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Sep 4 00:05:20.928690 (ntainerd)[1554]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 00:05:20.931115 coreos-metadata[1509]: Sep 04 00:05:20.929 INFO Fetch successful
Sep 4 00:05:20.941847 coreos-metadata[1509]: Sep 04 00:05:20.931 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Sep 4 00:05:20.957088 coreos-metadata[1509]: Sep 04 00:05:20.948 INFO Fetch successful
Sep 4 00:05:20.957088 coreos-metadata[1509]: Sep 04 00:05:20.949 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Sep 4 00:05:20.957088 coreos-metadata[1509]: Sep 04 00:05:20.952 INFO Fetch successful
Sep 4 00:05:20.996232 extend-filesystems[1513]: Resized partition /dev/sda9
Sep 4 00:05:21.013358 jq[1544]: true
Sep 4 00:05:21.025274 extend-filesystems[1565]: resize2fs 1.47.2 (1-Jan-2025)
Sep 4 00:05:21.051631 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting
Sep 4 00:05:21.051631 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 00:05:21.051631 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: ----------------------------------------------------
Sep 4 00:05:21.051631 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: ntp-4 is maintained by Network Time Foundation,
Sep 4 00:05:21.051631 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 00:05:21.051631 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: corporation. Support and training for ntp-4 are
Sep 4 00:05:21.051631 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: available at https://www.nwtime.org/support
Sep 4 00:05:21.051631 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: ----------------------------------------------------
Sep 4 00:05:21.045183 ntpd[1518]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting
Sep 4 00:05:21.045220 ntpd[1518]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 00:05:21.045235 ntpd[1518]: ----------------------------------------------------
Sep 4 00:05:21.045249 ntpd[1518]: ntp-4 is maintained by Network Time Foundation,
Sep 4 00:05:21.045262 ntpd[1518]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 00:05:21.045276 ntpd[1518]: corporation. Support and training for ntp-4 are
Sep 4 00:05:21.045290 ntpd[1518]: available at https://www.nwtime.org/support
Sep 4 00:05:21.045304 ntpd[1518]: ----------------------------------------------------
Sep 4 00:05:21.086340 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Sep 4 00:05:21.098500 ntpd[1518]: proto: precision = 0.105 usec (-23)
Sep 4 00:05:21.099213 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: proto: precision = 0.105 usec (-23)
Sep 4 00:05:21.112711 ntpd[1518]: basedate set to 2025-08-22
Sep 4 00:05:21.117373 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: basedate set to 2025-08-22
Sep 4 00:05:21.117373 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: gps base set to 2025-08-24 (week 2381)
Sep 4 00:05:21.112767 ntpd[1518]: gps base set to 2025-08-24 (week 2381)
Sep 4 00:05:21.136240 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 00:05:21.157750 ntpd[1518]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 00:05:21.159521 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 00:05:21.159521 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 00:05:21.159521 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 00:05:21.159521 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: Listen normally on 3 eth0 10.128.0.26:123
Sep 4 00:05:21.159521 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: Listen normally on 4 lo [::1]:123
Sep 4 00:05:21.159521 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: bind(21) AF_INET6 fe80::4001:aff:fe80:1a%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 00:05:21.159521 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1a%2#123
Sep 4 00:05:21.159521 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:1a%2
Sep 4 00:05:21.159521 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: Listening on routing socket on fd #21 for interface updates
Sep 4 00:05:21.157867 ntpd[1518]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 00:05:21.158298 ntpd[1518]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 00:05:21.158367 ntpd[1518]: Listen normally on 3 eth0 10.128.0.26:123
Sep 4 00:05:21.158441 ntpd[1518]: Listen normally on 4 lo [::1]:123
Sep 4 00:05:21.158528 ntpd[1518]: bind(21) AF_INET6 fe80::4001:aff:fe80:1a%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 00:05:21.158565 ntpd[1518]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1a%2#123
Sep 4 00:05:21.158590 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:1a%2
Sep 4 00:05:21.158644 ntpd[1518]: Listening on routing socket on fd #21 for interface updates
Sep 4 00:05:21.175432 tar[1543]: linux-amd64/LICENSE
Sep 4 00:05:21.175432 tar[1543]: linux-amd64/helm
Sep 4 00:05:21.189371 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 00:05:21.191991 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 00:05:21.212536 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Sep 4 00:05:21.212677 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 00:05:21.212677 ntpd[1518]: 4 Sep 00:05:21 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 00:05:21.209661 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 00:05:21.209719 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 00:05:21.274182 extend-filesystems[1565]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 4 00:05:21.274182 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 2
Sep 4 00:05:21.274182 extend-filesystems[1565]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Sep 4 00:05:21.274702 extend-filesystems[1513]: Resized filesystem in /dev/sda9
Sep 4 00:05:21.280592 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 00:05:21.282303 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 00:05:21.435961 bash[1591]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 00:05:21.486508 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 00:05:21.513390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:05:21.575714 systemd[1]: Starting sshkeys.service...
Sep 4 00:05:21.585014 dbus-daemon[1510]: [system] SELinux support is enabled
Sep 4 00:05:21.592449 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 00:05:21.609095 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 00:05:21.610251 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 00:05:21.620414 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 00:05:21.620472 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 00:05:21.621738 systemd-logind[1522]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 00:05:21.621786 systemd-logind[1522]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 4 00:05:21.621959 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 00:05:21.625202 dbus-daemon[1510]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1457 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 4 00:05:21.626405 systemd-logind[1522]: New seat seat0.
Sep 4 00:05:21.656640 update_engine[1523]: I20250904 00:05:21.652580 1523 update_check_scheduler.cc:74] Next update check in 2m53s
Sep 4 00:05:21.656435 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 00:05:21.674509 systemd-networkd[1457]: eth0: Gained IPv6LL
Sep 4 00:05:21.697874 dbus-daemon[1510]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 00:05:21.698314 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 00:05:21.709028 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 00:05:21.752477 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 00:05:21.764032 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 00:05:21.780051 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 00:05:21.796898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:05:21.818551 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 00:05:21.836912 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Sep 4 00:05:21.855155 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 4 00:05:21.874065 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 00:05:22.001937 sshd_keygen[1541]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 00:05:22.012004 init.sh[1607]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Sep 4 00:05:22.021381 init.sh[1607]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Sep 4 00:05:22.026630 init.sh[1607]: + /usr/bin/google_instance_setup
Sep 4 00:05:22.095128 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 4 00:05:22.112702 dbus-daemon[1510]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 4 00:05:22.123391 dbus-daemon[1510]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1609 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 4 00:05:22.156175 coreos-metadata[1604]: Sep 04 00:05:22.154 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Sep 4 00:05:22.159287 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 00:05:22.164773 coreos-metadata[1604]: Sep 04 00:05:22.160 INFO Fetch failed with 404: resource not found
Sep 4 00:05:22.164773 coreos-metadata[1604]: Sep 04 00:05:22.160 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Sep 4 00:05:22.164773 coreos-metadata[1604]: Sep 04 00:05:22.161 INFO Fetch successful
Sep 4 00:05:22.164773 coreos-metadata[1604]: Sep 04 00:05:22.161 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Sep 4 00:05:22.164773 coreos-metadata[1604]: Sep 04 00:05:22.162 INFO Fetch failed with 404: resource not found
Sep 4 00:05:22.164773 coreos-metadata[1604]: Sep 04 00:05:22.162 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Sep 4 00:05:22.164773 coreos-metadata[1604]: Sep 04 00:05:22.163 INFO Fetch failed with 404: resource not found
Sep 4 00:05:22.164773 coreos-metadata[1604]: Sep 04 00:05:22.163 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Sep 4 00:05:22.164773 coreos-metadata[1604]: Sep 04 00:05:22.164 INFO Fetch successful
Sep 4 00:05:22.171235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 00:05:22.175988 unknown[1604]: wrote ssh authorized keys file for user: core
Sep 4 00:05:22.196534 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 00:05:22.219538 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 4 00:05:22.236841 systemd[1]: Started sshd@0-10.128.0.26:22-147.75.109.163:49622.service - OpenSSH per-connection server daemon (147.75.109.163:49622).
Sep 4 00:05:22.407223 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 00:05:22.408257 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 00:05:22.413905 update-ssh-keys[1633]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 00:05:22.420522 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 00:05:22.449287 systemd[1]: Finished sshkeys.service.
Sep 4 00:05:22.459229 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 00:05:22.467197 containerd[1554]: time="2025-09-04T00:05:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 00:05:22.467363 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 00:05:22.477054 containerd[1554]: time="2025-09-04T00:05:22.473270558Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 4 00:05:22.617795 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 00:05:22.637279 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 00:05:22.638399 containerd[1554]: time="2025-09-04T00:05:22.637990236Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.085µs"
Sep 4 00:05:22.639166 containerd[1554]: time="2025-09-04T00:05:22.639099617Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 00:05:22.641117 containerd[1554]: time="2025-09-04T00:05:22.639784230Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 00:05:22.641117 containerd[1554]: time="2025-09-04T00:05:22.640078477Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 00:05:22.641117 containerd[1554]: time="2025-09-04T00:05:22.640109597Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 00:05:22.641117 containerd[1554]: time="2025-09-04T00:05:22.640185323Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:05:22.641117 containerd[1554]: time="2025-09-04T00:05:22.640300407Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:05:22.641117 containerd[1554]: time="2025-09-04T00:05:22.640329503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:05:22.645297 containerd[1554]: time="2025-09-04T00:05:22.645238565Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:05:22.649998 containerd[1554]: time="2025-09-04T00:05:22.648201097Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:05:22.649998 containerd[1554]: time="2025-09-04T00:05:22.648296630Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:05:22.649998 containerd[1554]: time="2025-09-04T00:05:22.648322944Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 00:05:22.655259 containerd[1554]: time="2025-09-04T00:05:22.652238271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 00:05:22.657188 containerd[1554]: time="2025-09-04T00:05:22.656551276Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:05:22.657188 containerd[1554]: time="2025-09-04T00:05:22.656663504Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:05:22.657188 containerd[1554]: time="2025-09-04T00:05:22.656693282Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 00:05:22.657188 containerd[1554]: time="2025-09-04T00:05:22.656761771Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 00:05:22.659382 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 00:05:22.664393 containerd[1554]: time="2025-09-04T00:05:22.663084809Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 00:05:22.665762 containerd[1554]: time="2025-09-04T00:05:22.664617286Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 00:05:22.667973 polkitd[1631]: Started polkitd version 126
Sep 4 00:05:22.669243 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.685369867Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.686811629Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.686891494Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.686917586Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.686945015Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.686972156Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.686999457Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.687022473Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.687042033Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.687086313Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 00:05:22.688927 containerd[1554]: time="2025-09-04T00:05:22.687106517Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 00:05:22.689999 containerd[1554]: time="2025-09-04T00:05:22.688750794Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 00:05:22.692840 containerd[1554]: time="2025-09-04T00:05:22.692421442Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 00:05:22.692840 containerd[1554]: time="2025-09-04T00:05:22.692625968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 00:05:22.692840 containerd[1554]: time="2025-09-04T00:05:22.692685693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 00:05:22.692840 containerd[1554]: time="2025-09-04T00:05:22.692767097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 00:05:22.692840 containerd[1554]: time="2025-09-04T00:05:22.692795887Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 00:05:22.693720 containerd[1554]: time="2025-09-04T00:05:22.693187956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 00:05:22.693720 containerd[1554]: time="2025-09-04T00:05:22.693261234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 00:05:22.693720 containerd[1554]: time="2025-09-04T00:05:22.693285952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 00:05:22.693720 containerd[1554]: time="2025-09-04T00:05:22.693315171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 00:05:22.693720 containerd[1554]: time="2025-09-04T00:05:22.693389400Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 00:05:22.693720 containerd[1554]: time="2025-09-04T00:05:22.693418453Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 00:05:22.696099 containerd[1554]: time="2025-09-04T00:05:22.695544385Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 00:05:22.696099 containerd[1554]: time="2025-09-04T00:05:22.695786952Z" level=info msg="Start snapshots syncer"
Sep 4 00:05:22.697484 containerd[1554]: time="2025-09-04T00:05:22.696930920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 00:05:22.702989 containerd[1554]: time="2025-09-04T00:05:22.699445374Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 00:05:22.702989 containerd[1554]: time="2025-09-04T00:05:22.699559533Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.699726037Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.700368356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.700528471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.701165775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.701212008Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.701268276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.701296155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.701347444Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.701436027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.701468133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.701494319Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.701719056Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.702222632Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 00:05:22.703440 containerd[1554]: time="2025-09-04T00:05:22.702270948Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 00:05:22.704260 containerd[1554]: time="2025-09-04T00:05:22.702298966Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 00:05:22.704260 containerd[1554]: time="2025-09-04T00:05:22.702324070Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 4 00:05:22.704260 containerd[1554]: time="2025-09-04T00:05:22.702382580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 4 00:05:22.704260 containerd[1554]: time="2025-09-04T00:05:22.702418438Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 4 00:05:22.704260 containerd[1554]: time="2025-09-04T00:05:22.702460941Z" level=info msg="runtime interface created"
Sep 4 00:05:22.704260 containerd[1554]: time="2025-09-04T00:05:22.702476498Z" level=info msg="created NRI interface"
Sep 4 00:05:22.704260 containerd[1554]: time="2025-09-04T00:05:22.702499538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 4 00:05:22.704260 containerd[1554]: time="2025-09-04T00:05:22.702538989Z" level=info msg="Connect containerd service"
Sep 4 00:05:22.704260 containerd[1554]: time="2025-09-04T00:05:22.702635699Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 00:05:22.710156 polkitd[1631]: Loading rules from directory /etc/polkit-1/rules.d
Sep 4 00:05:22.714538 polkitd[1631]: Loading rules from directory /run/polkit-1/rules.d
Sep 4 00:05:22.714670 polkitd[1631]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 4 00:05:22.715606 containerd[1554]: time="2025-09-04T00:05:22.715540888Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 00:05:22.720940 polkitd[1631]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 4 00:05:22.721063 polkitd[1631]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 4 00:05:22.722858 polkitd[1631]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 4 00:05:22.735982 polkitd[1631]: Finished loading, compiling and executing 2 rules
Sep 4 00:05:22.738012 systemd[1]: Started polkit.service - Authorization Manager.
Sep 4 00:05:22.745805 dbus-daemon[1510]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 4 00:05:22.758363 polkitd[1631]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 4 00:05:22.875086 systemd-hostnamed[1609]: Hostname set to (transient)
Sep 4 00:05:22.880867 systemd-resolved[1384]: System hostname changed to 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc'.
Sep 4 00:05:23.035428 sshd[1632]: Accepted publickey for core from 147.75.109.163 port 49622 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:05:23.047315 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:23.084545 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 00:05:23.098714 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 00:05:23.181042 systemd-logind[1522]: New session 1 of user core.
Sep 4 00:05:23.202671 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 00:05:23.222010 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 00:05:23.280497 (systemd)[1675]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 00:05:23.289229 containerd[1554]: time="2025-09-04T00:05:23.287464522Z" level=info msg="Start subscribing containerd event"
Sep 4 00:05:23.289229 containerd[1554]: time="2025-09-04T00:05:23.288860597Z" level=info msg="Start recovering state"
Sep 4 00:05:23.289229 containerd[1554]: time="2025-09-04T00:05:23.289089358Z" level=info msg="Start event monitor"
Sep 4 00:05:23.289229 containerd[1554]: time="2025-09-04T00:05:23.289127369Z" level=info msg="Start cni network conf syncer for default"
Sep 4 00:05:23.290238 containerd[1554]: time="2025-09-04T00:05:23.289812997Z" level=info msg="Start streaming server"
Sep 4 00:05:23.290238 containerd[1554]: time="2025-09-04T00:05:23.289853911Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 4 00:05:23.290238 containerd[1554]: time="2025-09-04T00:05:23.289871232Z" level=info msg="runtime interface starting up..."
Sep 4 00:05:23.290238 containerd[1554]: time="2025-09-04T00:05:23.289883388Z" level=info msg="starting plugins..."
Sep 4 00:05:23.290238 containerd[1554]: time="2025-09-04T00:05:23.289915275Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 4 00:05:23.295474 systemd-logind[1522]: New session c1 of user core.
Sep 4 00:05:23.300151 containerd[1554]: time="2025-09-04T00:05:23.295833673Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 00:05:23.300151 containerd[1554]: time="2025-09-04T00:05:23.297697070Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 00:05:23.299524 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 00:05:23.300506 containerd[1554]: time="2025-09-04T00:05:23.300466392Z" level=info msg="containerd successfully booted in 0.836418s"
Sep 4 00:05:23.804984 systemd[1675]: Queued start job for default target default.target.
Sep 4 00:05:23.816377 systemd[1675]: Created slice app.slice - User Application Slice.
Sep 4 00:05:23.816465 systemd[1675]: Reached target paths.target - Paths.
Sep 4 00:05:23.816575 systemd[1675]: Reached target timers.target - Timers.
Sep 4 00:05:23.820490 systemd[1675]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 00:05:23.866053 systemd[1675]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 00:05:23.868181 systemd[1675]: Reached target sockets.target - Sockets.
Sep 4 00:05:23.868301 systemd[1675]: Reached target basic.target - Basic System.
Sep 4 00:05:23.868393 systemd[1675]: Reached target default.target - Main User Target.
Sep 4 00:05:23.868458 systemd[1675]: Startup finished in 529ms.
Sep 4 00:05:23.868722 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 00:05:23.886599 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 00:05:23.952197 tar[1543]: linux-amd64/README.md
Sep 4 00:05:23.991688 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 00:05:24.047069 ntpd[1518]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1a%2]:123
Sep 4 00:05:24.048904 ntpd[1518]: 4 Sep 00:05:24 ntpd[1518]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1a%2]:123
Sep 4 00:05:24.149008 systemd[1]: Started sshd@1-10.128.0.26:22-147.75.109.163:49632.service - OpenSSH per-connection server daemon (147.75.109.163:49632).
Sep 4 00:05:24.158653 instance-setup[1620]: INFO Running google_set_multiqueue.
Sep 4 00:05:24.209027 instance-setup[1620]: INFO Set channels for eth0 to 2.
Sep 4 00:05:24.217530 instance-setup[1620]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Sep 4 00:05:24.223500 instance-setup[1620]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Sep 4 00:05:24.224090 instance-setup[1620]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Sep 4 00:05:24.229782 instance-setup[1620]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Sep 4 00:05:24.234490 instance-setup[1620]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Sep 4 00:05:24.238631 instance-setup[1620]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Sep 4 00:05:24.240347 instance-setup[1620]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Sep 4 00:05:24.242305 instance-setup[1620]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Sep 4 00:05:24.252484 instance-setup[1620]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 4 00:05:24.258021 instance-setup[1620]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 4 00:05:24.260575 instance-setup[1620]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Sep 4 00:05:24.260651 instance-setup[1620]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Sep 4 00:05:24.303444 init.sh[1607]: + /usr/bin/google_metadata_script_runner --script-type startup
Sep 4 00:05:24.531991 sshd[1696]: Accepted publickey for core from 147.75.109.163 port 49632 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:05:24.540858 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:24.568017 systemd-logind[1522]: New session 2 of user core.
Sep 4 00:05:24.576542 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 00:05:24.590710 startup-script[1721]: INFO Starting startup scripts.
Sep 4 00:05:24.605590 startup-script[1721]: INFO No startup scripts found in metadata.
Sep 4 00:05:24.605744 startup-script[1721]: INFO Finished running startup scripts.
Sep 4 00:05:24.653180 init.sh[1607]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Sep 4 00:05:24.653180 init.sh[1607]: + daemon_pids=()
Sep 4 00:05:24.653180 init.sh[1607]: + for d in accounts clock_skew network
Sep 4 00:05:24.653526 init.sh[1607]: + daemon_pids+=($!)
Sep 4 00:05:24.653526 init.sh[1607]: + for d in accounts clock_skew network
Sep 4 00:05:24.654620 init.sh[1726]: + /usr/bin/google_clock_skew_daemon
Sep 4 00:05:24.655327 init.sh[1607]: + daemon_pids+=($!)
Sep 4 00:05:24.656168 init.sh[1607]: + for d in accounts clock_skew network
Sep 4 00:05:24.656168 init.sh[1607]: + daemon_pids+=($!)
Sep 4 00:05:24.656168 init.sh[1607]: + NOTIFY_SOCKET=/run/systemd/notify
Sep 4 00:05:24.656168 init.sh[1607]: + /usr/bin/systemd-notify --ready
Sep 4 00:05:24.656826 init.sh[1727]: + /usr/bin/google_network_daemon
Sep 4 00:05:24.657356 init.sh[1725]: + /usr/bin/google_accounts_daemon
Sep 4 00:05:24.676862 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Sep 4 00:05:24.692303 init.sh[1607]: + wait -n 1725 1726 1727
Sep 4 00:05:24.804189 sshd[1724]: Connection closed by 147.75.109.163 port 49632
Sep 4 00:05:24.805053 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:24.822349 systemd[1]: sshd@1-10.128.0.26:22-147.75.109.163:49632.service: Deactivated successfully.
Sep 4 00:05:24.831585 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 00:05:24.844902 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit.
Sep 4 00:05:24.868675 systemd[1]: Started sshd@2-10.128.0.26:22-147.75.109.163:49642.service - OpenSSH per-connection server daemon (147.75.109.163:49642).
Sep 4 00:05:24.883715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:05:24.901362 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 00:05:24.904017 systemd-logind[1522]: Removed session 2.
Sep 4 00:05:24.908379 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:05:24.910695 systemd[1]: Startup finished in 4.507s (kernel) + 8.556s (initrd) + 10.539s (userspace) = 23.603s.
Sep 4 00:05:25.320177 sshd[1738]: Accepted publickey for core from 147.75.109.163 port 49642 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:05:25.327052 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:25.347212 systemd-logind[1522]: New session 3 of user core.
Sep 4 00:05:25.356511 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 00:05:25.391732 google-clock-skew[1726]: INFO Starting Google Clock Skew daemon.
Sep 4 00:05:25.402462 google-clock-skew[1726]: INFO Clock drift token has changed: 0.
Sep 4 00:05:25.412652 google-networking[1727]: INFO Starting Google Networking daemon.
Sep 4 00:05:25.506479 groupadd[1759]: group added to /etc/group: name=google-sudoers, GID=1000
Sep 4 00:05:25.516060 groupadd[1759]: group added to /etc/gshadow: name=google-sudoers
Sep 4 00:05:25.573195 sshd[1756]: Connection closed by 147.75.109.163 port 49642
Sep 4 00:05:25.572584 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:26.001010 google-clock-skew[1726]: INFO Synced system time with hardware clock.
Sep 4 00:05:26.001576 systemd-resolved[1384]: Clock change detected. Flushing caches.
Sep 4 00:05:26.010238 groupadd[1759]: new group: name=google-sudoers, GID=1000
Sep 4 00:05:26.013815 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit.
Sep 4 00:05:26.015154 systemd[1]: sshd@2-10.128.0.26:22-147.75.109.163:49642.service: Deactivated successfully.
Sep 4 00:05:26.025753 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 00:05:26.031661 systemd-logind[1522]: Removed session 3.
Sep 4 00:05:26.060845 google-accounts[1725]: INFO Starting Google Accounts daemon.
Sep 4 00:05:26.084134 google-accounts[1725]: WARNING OS Login not installed.
Sep 4 00:05:26.087528 google-accounts[1725]: INFO Creating a new user account for 0.
Sep 4 00:05:26.094938 init.sh[1771]: useradd: invalid user name '0': use --badname to ignore
Sep 4 00:05:26.095698 google-accounts[1725]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Sep 4 00:05:26.452108 kubelet[1739]: E0904 00:05:26.451883 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:05:26.457500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:05:26.457806 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:05:26.458713 systemd[1]: kubelet.service: Consumed 1.486s CPU time, 266M memory peak.
Sep 4 00:05:36.056923 systemd[1]: Started sshd@3-10.128.0.26:22-147.75.109.163:38672.service - OpenSSH per-connection server daemon (147.75.109.163:38672).
Sep 4 00:05:36.383951 sshd[1776]: Accepted publickey for core from 147.75.109.163 port 38672 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:05:36.386265 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:36.397643 systemd-logind[1522]: New session 4 of user core.
Sep 4 00:05:36.405893 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 00:05:36.562539 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:05:36.566026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:05:36.604427 sshd[1778]: Connection closed by 147.75.109.163 port 38672
Sep 4 00:05:36.606364 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:36.616202 systemd[1]: sshd@3-10.128.0.26:22-147.75.109.163:38672.service: Deactivated successfully.
Sep 4 00:05:36.621879 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 00:05:36.624782 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit.
Sep 4 00:05:36.627819 systemd-logind[1522]: Removed session 4.
Sep 4 00:05:36.667662 systemd[1]: Started sshd@4-10.128.0.26:22-147.75.109.163:38680.service - OpenSSH per-connection server daemon (147.75.109.163:38680).
Sep 4 00:05:36.984518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:05:36.996259 sshd[1787]: Accepted publickey for core from 147.75.109.163 port 38680 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:05:36.998549 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:05:36.999278 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:37.010119 systemd-logind[1522]: New session 5 of user core.
Sep 4 00:05:37.018019 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 00:05:37.073420 kubelet[1794]: E0904 00:05:37.073342 1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:05:37.079708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:05:37.080078 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:05:37.080867 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110.6M memory peak.
Sep 4 00:05:37.208868 sshd[1800]: Connection closed by 147.75.109.163 port 38680
Sep 4 00:05:37.209931 sshd-session[1787]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:37.216923 systemd[1]: sshd@4-10.128.0.26:22-147.75.109.163:38680.service: Deactivated successfully.
Sep 4 00:05:37.220469 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 00:05:37.222345 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit.
Sep 4 00:05:37.224920 systemd-logind[1522]: Removed session 5.
Sep 4 00:05:37.263901 systemd[1]: Started sshd@5-10.128.0.26:22-147.75.109.163:38696.service - OpenSSH per-connection server daemon (147.75.109.163:38696).
Sep 4 00:05:37.573079 sshd[1807]: Accepted publickey for core from 147.75.109.163 port 38696 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:05:37.575590 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:37.584538 systemd-logind[1522]: New session 6 of user core.
Sep 4 00:05:37.591832 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 00:05:37.791579 sshd[1809]: Connection closed by 147.75.109.163 port 38696
Sep 4 00:05:37.792949 sshd-session[1807]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:37.800841 systemd[1]: sshd@5-10.128.0.26:22-147.75.109.163:38696.service: Deactivated successfully.
Sep 4 00:05:37.803925 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 00:05:37.805393 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit.
Sep 4 00:05:37.808915 systemd-logind[1522]: Removed session 6.
Sep 4 00:05:37.851609 systemd[1]: Started sshd@6-10.128.0.26:22-147.75.109.163:38708.service - OpenSSH per-connection server daemon (147.75.109.163:38708).
Sep 4 00:05:38.176756 sshd[1815]: Accepted publickey for core from 147.75.109.163 port 38708 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:05:38.179330 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:38.189486 systemd-logind[1522]: New session 7 of user core.
Sep 4 00:05:38.195795 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 00:05:38.377734 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 00:05:38.378380 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:05:38.395836 sudo[1818]: pam_unix(sudo:session): session closed for user root
Sep 4 00:05:38.440150 sshd[1817]: Connection closed by 147.75.109.163 port 38708
Sep 4 00:05:38.441835 sshd-session[1815]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:38.450118 systemd[1]: sshd@6-10.128.0.26:22-147.75.109.163:38708.service: Deactivated successfully.
Sep 4 00:05:38.453184 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 00:05:38.454733 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit.
Sep 4 00:05:38.457647 systemd-logind[1522]: Removed session 7.
Sep 4 00:05:38.496860 systemd[1]: Started sshd@7-10.128.0.26:22-147.75.109.163:38712.service - OpenSSH per-connection server daemon (147.75.109.163:38712).
Sep 4 00:05:38.827837 sshd[1824]: Accepted publickey for core from 147.75.109.163 port 38712 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:05:38.829974 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:38.839525 systemd-logind[1522]: New session 8 of user core.
Sep 4 00:05:38.849924 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 00:05:39.013348 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 00:05:39.013998 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:05:39.022904 sudo[1828]: pam_unix(sudo:session): session closed for user root
Sep 4 00:05:39.041413 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 00:05:39.042046 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:05:39.058270 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:05:39.130019 augenrules[1850]: No rules
Sep 4 00:05:39.132243 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:05:39.132675 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:05:39.135477 sudo[1827]: pam_unix(sudo:session): session closed for user root
Sep 4 00:05:39.179831 sshd[1826]: Connection closed by 147.75.109.163 port 38712
Sep 4 00:05:39.181189 sshd-session[1824]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:39.187592 systemd[1]: sshd@7-10.128.0.26:22-147.75.109.163:38712.service: Deactivated successfully.
Sep 4 00:05:39.190928 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 00:05:39.193752 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit.
Sep 4 00:05:39.196766 systemd-logind[1522]: Removed session 8.
Sep 4 00:05:39.236272 systemd[1]: Started sshd@8-10.128.0.26:22-147.75.109.163:38722.service - OpenSSH per-connection server daemon (147.75.109.163:38722).
Sep 4 00:05:39.561295 sshd[1859]: Accepted publickey for core from 147.75.109.163 port 38722 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:05:39.562927 sshd-session[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:39.571527 systemd-logind[1522]: New session 9 of user core.
Sep 4 00:05:39.578878 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 00:05:39.744237 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 00:05:39.744889 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:05:40.280983 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 00:05:40.301320 (dockerd)[1880]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 00:05:40.639866 dockerd[1880]: time="2025-09-04T00:05:40.639110757Z" level=info msg="Starting up"
Sep 4 00:05:40.642818 dockerd[1880]: time="2025-09-04T00:05:40.642486542Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 4 00:05:40.726332 dockerd[1880]: time="2025-09-04T00:05:40.726199305Z" level=info msg="Loading containers: start."
Sep 4 00:05:40.750482 kernel: Initializing XFRM netlink socket
Sep 4 00:05:41.157974 systemd-networkd[1457]: docker0: Link UP
Sep 4 00:05:41.165075 dockerd[1880]: time="2025-09-04T00:05:41.165004464Z" level=info msg="Loading containers: done."
Sep 4 00:05:41.183795 dockerd[1880]: time="2025-09-04T00:05:41.183141722Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 00:05:41.183795 dockerd[1880]: time="2025-09-04T00:05:41.183301035Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 4 00:05:41.183795 dockerd[1880]: time="2025-09-04T00:05:41.183518657Z" level=info msg="Initializing buildkit" Sep 4 00:05:41.221414 dockerd[1880]: time="2025-09-04T00:05:41.221337591Z" level=info msg="Completed buildkit initialization" Sep 4 00:05:41.231764 dockerd[1880]: time="2025-09-04T00:05:41.231664266Z" level=info msg="Daemon has completed initialization" Sep 4 00:05:41.232600 dockerd[1880]: time="2025-09-04T00:05:41.231870604Z" level=info msg="API listen on /run/docker.sock" Sep 4 00:05:41.232039 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 00:05:42.239326 containerd[1554]: time="2025-09-04T00:05:42.239236570Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 00:05:42.909933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441719842.mount: Deactivated successfully. 
Sep 4 00:05:44.747086 containerd[1554]: time="2025-09-04T00:05:44.746956110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:44.749062 containerd[1554]: time="2025-09-04T00:05:44.749005371Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28807315" Sep 4 00:05:44.751476 containerd[1554]: time="2025-09-04T00:05:44.750303133Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:44.755679 containerd[1554]: time="2025-09-04T00:05:44.755614384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:44.759780 containerd[1554]: time="2025-09-04T00:05:44.759683640Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.520339156s" Sep 4 00:05:44.759780 containerd[1554]: time="2025-09-04T00:05:44.759763298Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 4 00:05:44.760877 containerd[1554]: time="2025-09-04T00:05:44.760729830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 00:05:46.464147 containerd[1554]: time="2025-09-04T00:05:46.463975536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:46.466750 containerd[1554]: time="2025-09-04T00:05:46.466361008Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24786062" Sep 4 00:05:46.468492 containerd[1554]: time="2025-09-04T00:05:46.468401571Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:46.473194 containerd[1554]: time="2025-09-04T00:05:46.473097026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:46.475605 containerd[1554]: time="2025-09-04T00:05:46.474918022Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.714130922s" Sep 4 00:05:46.475605 containerd[1554]: time="2025-09-04T00:05:46.474975802Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 4 00:05:46.476294 containerd[1554]: time="2025-09-04T00:05:46.476248528Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 00:05:47.331779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 00:05:47.335733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:47.705748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 00:05:47.721224 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:05:47.852606 kubelet[2153]: E0904 00:05:47.852102 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:05:47.862412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:05:47.862752 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:05:47.863735 systemd[1]: kubelet.service: Consumed 320ms CPU time, 109.8M memory peak. Sep 4 00:05:48.197874 containerd[1554]: time="2025-09-04T00:05:48.197579142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:48.200291 containerd[1554]: time="2025-09-04T00:05:48.200184620Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19176952" Sep 4 00:05:48.202281 containerd[1554]: time="2025-09-04T00:05:48.202168501Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:48.208206 containerd[1554]: time="2025-09-04T00:05:48.207562099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:48.209279 containerd[1554]: time="2025-09-04T00:05:48.209215451Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id 
\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.732726287s" Sep 4 00:05:48.209506 containerd[1554]: time="2025-09-04T00:05:48.209472637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 4 00:05:48.210609 containerd[1554]: time="2025-09-04T00:05:48.210575221Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 00:05:49.507124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53830390.mount: Deactivated successfully. Sep 4 00:05:50.304473 containerd[1554]: time="2025-09-04T00:05:50.304324120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:50.305985 containerd[1554]: time="2025-09-04T00:05:50.305920180Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30899065" Sep 4 00:05:50.308099 containerd[1554]: time="2025-09-04T00:05:50.307994222Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:50.312479 containerd[1554]: time="2025-09-04T00:05:50.312380179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:50.313912 containerd[1554]: time="2025-09-04T00:05:50.313856027Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.102868113s" Sep 4 00:05:50.314277 containerd[1554]: time="2025-09-04T00:05:50.314230354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 4 00:05:50.315218 containerd[1554]: time="2025-09-04T00:05:50.315149704Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 00:05:50.824280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348634853.mount: Deactivated successfully. Sep 4 00:05:52.265014 containerd[1554]: time="2025-09-04T00:05:52.264908554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:52.268082 containerd[1554]: time="2025-09-04T00:05:52.267196817Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Sep 4 00:05:52.270197 containerd[1554]: time="2025-09-04T00:05:52.270127955Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:52.276188 containerd[1554]: time="2025-09-04T00:05:52.276103299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:52.278560 containerd[1554]: time="2025-09-04T00:05:52.278491488Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.963290317s" Sep 4 00:05:52.278920 containerd[1554]: time="2025-09-04T00:05:52.278871713Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 00:05:52.280019 containerd[1554]: time="2025-09-04T00:05:52.279959555Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 00:05:52.779023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580513551.mount: Deactivated successfully. Sep 4 00:05:52.789644 containerd[1554]: time="2025-09-04T00:05:52.789562794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:05:52.791155 containerd[1554]: time="2025-09-04T00:05:52.791086848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Sep 4 00:05:52.792536 containerd[1554]: time="2025-09-04T00:05:52.792486269Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:05:52.796953 containerd[1554]: time="2025-09-04T00:05:52.796864581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:05:52.798640 containerd[1554]: time="2025-09-04T00:05:52.797937065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 517.918433ms" Sep 4 00:05:52.798640 containerd[1554]: time="2025-09-04T00:05:52.797989342Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 00:05:52.799328 containerd[1554]: time="2025-09-04T00:05:52.799268322Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 00:05:53.327023 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 00:05:53.352289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108466114.mount: Deactivated successfully. Sep 4 00:05:55.952517 containerd[1554]: time="2025-09-04T00:05:55.952394483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:55.954453 containerd[1554]: time="2025-09-04T00:05:55.954390675Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57689565" Sep 4 00:05:55.956481 containerd[1554]: time="2025-09-04T00:05:55.955776906Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:55.961625 containerd[1554]: time="2025-09-04T00:05:55.961474203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:55.963652 containerd[1554]: time="2025-09-04T00:05:55.963400606Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest 
\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.164084402s" Sep 4 00:05:55.963652 containerd[1554]: time="2025-09-04T00:05:55.963481170Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 4 00:05:58.049116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 00:05:58.053550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:58.411657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:58.424316 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:05:58.503288 kubelet[2311]: E0904 00:05:58.503207 2311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:05:58.508547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:05:58.508839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:05:58.510031 systemd[1]: kubelet.service: Consumed 286ms CPU time, 108.5M memory peak. Sep 4 00:06:00.025647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:00.026421 systemd[1]: kubelet.service: Consumed 286ms CPU time, 108.5M memory peak. Sep 4 00:06:00.031790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:00.092215 systemd[1]: Reload requested from client PID 2325 ('systemctl') (unit session-9.scope)... Sep 4 00:06:00.092246 systemd[1]: Reloading... 
Sep 4 00:06:00.394485 zram_generator::config[2369]: No configuration found. Sep 4 00:06:00.536167 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:06:00.721349 systemd[1]: Reloading finished in 628 ms. Sep 4 00:06:00.806526 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 00:06:00.806700 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 00:06:00.807207 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:00.807295 systemd[1]: kubelet.service: Consumed 228ms CPU time, 98.2M memory peak. Sep 4 00:06:00.810341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:01.137385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:01.152840 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:06:01.232803 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:06:01.232803 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 00:06:01.232803 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 00:06:01.233577 kubelet[2420]: I0904 00:06:01.232931 2420 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:06:01.845292 kubelet[2420]: I0904 00:06:01.845210 2420 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 00:06:01.845292 kubelet[2420]: I0904 00:06:01.845273 2420 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:06:01.846115 kubelet[2420]: I0904 00:06:01.846081 2420 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 00:06:01.901580 kubelet[2420]: E0904 00:06:01.901518 2420 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:01.904730 kubelet[2420]: I0904 00:06:01.904678 2420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:06:01.917591 kubelet[2420]: I0904 00:06:01.917534 2420 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:06:01.924265 kubelet[2420]: I0904 00:06:01.924213 2420 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 00:06:01.926117 kubelet[2420]: I0904 00:06:01.926019 2420 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:06:01.926441 kubelet[2420]: I0904 00:06:01.926106 2420 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 00:06:01.926691 kubelet[2420]: I0904 00:06:01.926467 2420 topology_manager.go:138] 
"Creating topology manager with none policy" Sep 4 00:06:01.926691 kubelet[2420]: I0904 00:06:01.926494 2420 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 00:06:01.926828 kubelet[2420]: I0904 00:06:01.926728 2420 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:06:01.934947 kubelet[2420]: I0904 00:06:01.934860 2420 kubelet.go:446] "Attempting to sync node with API server" Sep 4 00:06:01.934947 kubelet[2420]: I0904 00:06:01.934954 2420 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:06:01.937693 kubelet[2420]: I0904 00:06:01.935004 2420 kubelet.go:352] "Adding apiserver pod source" Sep 4 00:06:01.937693 kubelet[2420]: I0904 00:06:01.935028 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:06:01.948949 kubelet[2420]: W0904 00:06:01.948846 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Sep 4 00:06:01.948949 kubelet[2420]: E0904 00:06:01.948967 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:01.949287 kubelet[2420]: I0904 00:06:01.949145 2420 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:06:01.949981 kubelet[2420]: I0904 00:06:01.949907 2420 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:06:01.950131 kubelet[2420]: W0904 00:06:01.950050 2420 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 00:06:01.955468 kubelet[2420]: I0904 00:06:01.955398 2420 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 00:06:01.955657 kubelet[2420]: I0904 00:06:01.955502 2420 server.go:1287] "Started kubelet" Sep 4 00:06:01.955899 kubelet[2420]: W0904 00:06:01.955808 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Sep 4 00:06:01.956011 kubelet[2420]: E0904 00:06:01.955909 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:01.957622 kubelet[2420]: I0904 00:06:01.957568 2420 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:06:01.960332 kubelet[2420]: I0904 00:06:01.960263 2420 server.go:479] "Adding debug handlers to kubelet server" Sep 4 00:06:01.965593 kubelet[2420]: I0904 00:06:01.964729 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:06:01.965593 kubelet[2420]: I0904 00:06:01.965268 2420 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:06:01.968555 kubelet[2420]: I0904 00:06:01.967796 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:06:01.973069 kubelet[2420]: I0904 00:06:01.973032 2420 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:06:01.973369 kubelet[2420]: E0904 00:06:01.973332 2420 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:06:01.976894 kubelet[2420]: I0904 00:06:01.976861 2420 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 00:06:01.977545 kubelet[2420]: E0904 00:06:01.977515 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" Sep 4 00:06:01.978670 kubelet[2420]: I0904 00:06:01.978644 2420 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 00:06:01.978889 kubelet[2420]: I0904 00:06:01.978870 2420 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:06:01.982844 kubelet[2420]: E0904 00:06:01.980561 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc.1861eb96a87afb9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,UID:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,},FirstTimestamp:2025-09-04 00:06:01.955457947 +0000 UTC m=+0.792451250,LastTimestamp:2025-09-04 00:06:01.955457947 +0000 UTC m=+0.792451250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,}" Sep 4 00:06:01.983085 kubelet[2420]: W0904 00:06:01.982876 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Sep 4 00:06:01.983085 kubelet[2420]: E0904 00:06:01.982963 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:01.983185 kubelet[2420]: E0904 00:06:01.983083 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="200ms" Sep 4 00:06:01.985187 kubelet[2420]: I0904 00:06:01.984725 2420 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:06:01.985187 kubelet[2420]: I0904 00:06:01.984860 2420 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:06:01.987500 kubelet[2420]: I0904 00:06:01.987088 2420 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:06:02.007911 kubelet[2420]: I0904 00:06:02.007796 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:06:02.010886 kubelet[2420]: I0904 00:06:02.010809 2420 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 00:06:02.010886 kubelet[2420]: I0904 00:06:02.010856 2420 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 00:06:02.011126 kubelet[2420]: I0904 00:06:02.010900 2420 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 00:06:02.011126 kubelet[2420]: I0904 00:06:02.010918 2420 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 00:06:02.011126 kubelet[2420]: E0904 00:06:02.011023 2420 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:06:02.026791 kubelet[2420]: W0904 00:06:02.026569 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Sep 4 00:06:02.026791 kubelet[2420]: E0904 00:06:02.026703 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:02.059017 kubelet[2420]: I0904 00:06:02.058943 2420 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 00:06:02.059017 kubelet[2420]: I0904 00:06:02.058973 2420 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 00:06:02.059017 kubelet[2420]: I0904 00:06:02.059010 2420 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:06:02.064304 kubelet[2420]: I0904 00:06:02.062352 2420 policy_none.go:49] "None policy: Start" Sep 4 00:06:02.064304 kubelet[2420]: I0904 00:06:02.062400 2420 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 00:06:02.064304 
kubelet[2420]: I0904 00:06:02.062421 2420 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:06:02.075405 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 00:06:02.077955 kubelet[2420]: E0904 00:06:02.077890 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" Sep 4 00:06:02.094061 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 00:06:02.102007 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 00:06:02.111779 kubelet[2420]: E0904 00:06:02.111707 2420 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 00:06:02.114249 kubelet[2420]: I0904 00:06:02.113588 2420 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:06:02.114249 kubelet[2420]: I0904 00:06:02.113995 2420 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:06:02.114249 kubelet[2420]: I0904 00:06:02.114015 2420 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:06:02.114548 kubelet[2420]: I0904 00:06:02.114468 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:06:02.118467 kubelet[2420]: E0904 00:06:02.118073 2420 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 00:06:02.119115 kubelet[2420]: E0904 00:06:02.118872 2420 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" Sep 4 00:06:02.184389 kubelet[2420]: E0904 00:06:02.184294 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="400ms" Sep 4 00:06:02.221315 kubelet[2420]: I0904 00:06:02.221209 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.222038 kubelet[2420]: E0904 00:06:02.221979 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.338005 systemd[1]: Created slice kubepods-burstable-podce85c2765ba4ec9afe67803f428009b2.slice - libcontainer container kubepods-burstable-podce85c2765ba4ec9afe67803f428009b2.slice. Sep 4 00:06:02.350475 kubelet[2420]: E0904 00:06:02.350285 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.357260 systemd[1]: Created slice kubepods-burstable-podefc7731598c79bfd3cdb007aebb35a07.slice - libcontainer container kubepods-burstable-podefc7731598c79bfd3cdb007aebb35a07.slice. 
Sep 4 00:06:02.364407 kubelet[2420]: E0904 00:06:02.363828 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.376006 systemd[1]: Created slice kubepods-burstable-pod90508dd84af6c483a8ea9a96425e4ca9.slice - libcontainer container kubepods-burstable-pod90508dd84af6c483a8ea9a96425e4ca9.slice. Sep 4 00:06:02.380643 kubelet[2420]: I0904 00:06:02.380546 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.380910 kubelet[2420]: I0904 00:06:02.380674 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.380910 kubelet[2420]: I0904 00:06:02.380743 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " 
pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.380910 kubelet[2420]: I0904 00:06:02.380813 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90508dd84af6c483a8ea9a96425e4ca9-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"90508dd84af6c483a8ea9a96425e4ca9\") " pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.380910 kubelet[2420]: I0904 00:06:02.380846 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce85c2765ba4ec9afe67803f428009b2-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"ce85c2765ba4ec9afe67803f428009b2\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.381157 kubelet[2420]: I0904 00:06:02.380927 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce85c2765ba4ec9afe67803f428009b2-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"ce85c2765ba4ec9afe67803f428009b2\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.381157 kubelet[2420]: I0904 00:06:02.381007 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce85c2765ba4ec9afe67803f428009b2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"ce85c2765ba4ec9afe67803f428009b2\") " 
pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.381157 kubelet[2420]: I0904 00:06:02.381078 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.381157 kubelet[2420]: I0904 00:06:02.381145 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-kubeconfig\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.383299 kubelet[2420]: E0904 00:06:02.382897 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.428404 kubelet[2420]: I0904 00:06:02.428349 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.428989 kubelet[2420]: E0904 00:06:02.428937 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.585491 kubelet[2420]: E0904 00:06:02.585367 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="800ms" Sep 4 00:06:02.654526 containerd[1554]: time="2025-09-04T00:06:02.654071305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,Uid:ce85c2765ba4ec9afe67803f428009b2,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:02.666044 containerd[1554]: time="2025-09-04T00:06:02.665873298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,Uid:efc7731598c79bfd3cdb007aebb35a07,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:02.686034 containerd[1554]: time="2025-09-04T00:06:02.685764839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,Uid:90508dd84af6c483a8ea9a96425e4ca9,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:02.728615 containerd[1554]: time="2025-09-04T00:06:02.728304038Z" level=info msg="connecting to shim d524a3e773c9e775e33bb4d0b9b72327f02c731c135d14f2f4b15470dc890449" address="unix:///run/containerd/s/2db507ee5d33d6784b3992e2465cb86ae1ade8bfb6daf95b010ee3c6f3bf109f" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:02.771908 containerd[1554]: time="2025-09-04T00:06:02.771747438Z" level=info msg="connecting to shim d9984f87791bdefc2c5b3057ed10d8e7b8889efb18169d8c20aaf46bc203bb8b" address="unix:///run/containerd/s/ca39d395a8966c6eaaf914a58fea1b50ea86eb942835edc1f4d3443bc1adb7ff" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:02.839822 kubelet[2420]: I0904 00:06:02.838926 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.840841 kubelet[2420]: E0904 00:06:02.840794 2420 kubelet_node_status.go:107] 
"Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:02.850007 containerd[1554]: time="2025-09-04T00:06:02.849915219Z" level=info msg="connecting to shim 4d21fabdc9f5da20e43cf7256b0d657a02ccf2833641ffdd5253004cbc6651f7" address="unix:///run/containerd/s/553b4f4dfc93ceee606df6b06c25db99b76eef869c5588b749a960170b02d201" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:02.851806 systemd[1]: Started cri-containerd-d524a3e773c9e775e33bb4d0b9b72327f02c731c135d14f2f4b15470dc890449.scope - libcontainer container d524a3e773c9e775e33bb4d0b9b72327f02c731c135d14f2f4b15470dc890449. Sep 4 00:06:02.947854 systemd[1]: Started cri-containerd-d9984f87791bdefc2c5b3057ed10d8e7b8889efb18169d8c20aaf46bc203bb8b.scope - libcontainer container d9984f87791bdefc2c5b3057ed10d8e7b8889efb18169d8c20aaf46bc203bb8b. Sep 4 00:06:02.976916 systemd[1]: Started cri-containerd-4d21fabdc9f5da20e43cf7256b0d657a02ccf2833641ffdd5253004cbc6651f7.scope - libcontainer container 4d21fabdc9f5da20e43cf7256b0d657a02ccf2833641ffdd5253004cbc6651f7. 
Sep 4 00:06:03.043703 kubelet[2420]: W0904 00:06:03.043587 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Sep 4 00:06:03.044798 kubelet[2420]: E0904 00:06:03.043996 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:03.104969 kubelet[2420]: W0904 00:06:03.104110 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Sep 4 00:06:03.105336 kubelet[2420]: E0904 00:06:03.104991 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:03.108071 containerd[1554]: time="2025-09-04T00:06:03.107915070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,Uid:ce85c2765ba4ec9afe67803f428009b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d524a3e773c9e775e33bb4d0b9b72327f02c731c135d14f2f4b15470dc890449\"" Sep 4 00:06:03.118951 kubelet[2420]: E0904 00:06:03.116945 2420 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" 
podName="kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208" Sep 4 00:06:03.125033 containerd[1554]: time="2025-09-04T00:06:03.122150055Z" level=info msg="CreateContainer within sandbox \"d524a3e773c9e775e33bb4d0b9b72327f02c731c135d14f2f4b15470dc890449\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 00:06:03.193841 kubelet[2420]: W0904 00:06:03.193697 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Sep 4 00:06:03.194070 kubelet[2420]: E0904 00:06:03.193894 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:03.342211 containerd[1554]: time="2025-09-04T00:06:03.342122420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,Uid:efc7731598c79bfd3cdb007aebb35a07,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9984f87791bdefc2c5b3057ed10d8e7b8889efb18169d8c20aaf46bc203bb8b\"" Sep 4 00:06:03.346312 containerd[1554]: time="2025-09-04T00:06:03.346221193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc,Uid:90508dd84af6c483a8ea9a96425e4ca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d21fabdc9f5da20e43cf7256b0d657a02ccf2833641ffdd5253004cbc6651f7\"" Sep 4 
00:06:03.349011 kubelet[2420]: E0904 00:06:03.348899 2420 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf3" Sep 4 00:06:03.350744 kubelet[2420]: E0904 00:06:03.350677 2420 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208" Sep 4 00:06:03.353487 containerd[1554]: time="2025-09-04T00:06:03.352995196Z" level=info msg="CreateContainer within sandbox \"d9984f87791bdefc2c5b3057ed10d8e7b8889efb18169d8c20aaf46bc203bb8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 00:06:03.354314 containerd[1554]: time="2025-09-04T00:06:03.354134915Z" level=info msg="CreateContainer within sandbox \"4d21fabdc9f5da20e43cf7256b0d657a02ccf2833641ffdd5253004cbc6651f7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 00:06:03.358163 containerd[1554]: time="2025-09-04T00:06:03.358106196Z" level=info msg="Container c1633774369cfa6f7e66ef8770d9c0f1cd74fed20f06c344f6c76d241925d8d1: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:03.372838 containerd[1554]: time="2025-09-04T00:06:03.372704877Z" level=info msg="Container 18f5bc37891817e196dde4b9304fd4665fcaf676ba3dd1b480fc471f8f96d7b9: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:03.381484 containerd[1554]: time="2025-09-04T00:06:03.380851598Z" level=info msg="Container c81e35f3b4568dc3bd76e8cd06954565547d854c57e72df7677b88a15549553a: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:03.381484 containerd[1554]: time="2025-09-04T00:06:03.381058887Z" level=info msg="CreateContainer within sandbox \"d524a3e773c9e775e33bb4d0b9b72327f02c731c135d14f2f4b15470dc890449\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c1633774369cfa6f7e66ef8770d9c0f1cd74fed20f06c344f6c76d241925d8d1\"" Sep 4 00:06:03.382349 containerd[1554]: time="2025-09-04T00:06:03.382287825Z" level=info msg="StartContainer for \"c1633774369cfa6f7e66ef8770d9c0f1cd74fed20f06c344f6c76d241925d8d1\"" Sep 4 00:06:03.385569 containerd[1554]: time="2025-09-04T00:06:03.385392106Z" level=info msg="connecting to shim c1633774369cfa6f7e66ef8770d9c0f1cd74fed20f06c344f6c76d241925d8d1" address="unix:///run/containerd/s/2db507ee5d33d6784b3992e2465cb86ae1ade8bfb6daf95b010ee3c6f3bf109f" protocol=ttrpc version=3 Sep 4 00:06:03.387372 kubelet[2420]: E0904 00:06:03.387301 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="1.6s" Sep 4 00:06:03.391507 containerd[1554]: time="2025-09-04T00:06:03.391258694Z" level=info msg="CreateContainer within sandbox \"4d21fabdc9f5da20e43cf7256b0d657a02ccf2833641ffdd5253004cbc6651f7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"18f5bc37891817e196dde4b9304fd4665fcaf676ba3dd1b480fc471f8f96d7b9\"" Sep 4 00:06:03.392535 containerd[1554]: time="2025-09-04T00:06:03.392463790Z" level=info msg="StartContainer for \"18f5bc37891817e196dde4b9304fd4665fcaf676ba3dd1b480fc471f8f96d7b9\"" Sep 4 00:06:03.395710 containerd[1554]: time="2025-09-04T00:06:03.395648262Z" level=info msg="connecting to shim 18f5bc37891817e196dde4b9304fd4665fcaf676ba3dd1b480fc471f8f96d7b9" address="unix:///run/containerd/s/553b4f4dfc93ceee606df6b06c25db99b76eef869c5588b749a960170b02d201" protocol=ttrpc version=3 Sep 4 00:06:03.401989 containerd[1554]: time="2025-09-04T00:06:03.401925560Z" level=info msg="CreateContainer within sandbox 
\"d9984f87791bdefc2c5b3057ed10d8e7b8889efb18169d8c20aaf46bc203bb8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c81e35f3b4568dc3bd76e8cd06954565547d854c57e72df7677b88a15549553a\"" Sep 4 00:06:03.402821 containerd[1554]: time="2025-09-04T00:06:03.402783033Z" level=info msg="StartContainer for \"c81e35f3b4568dc3bd76e8cd06954565547d854c57e72df7677b88a15549553a\"" Sep 4 00:06:03.408683 containerd[1554]: time="2025-09-04T00:06:03.408623497Z" level=info msg="connecting to shim c81e35f3b4568dc3bd76e8cd06954565547d854c57e72df7677b88a15549553a" address="unix:///run/containerd/s/ca39d395a8966c6eaaf914a58fea1b50ea86eb942835edc1f4d3443bc1adb7ff" protocol=ttrpc version=3 Sep 4 00:06:03.434966 systemd[1]: Started cri-containerd-c1633774369cfa6f7e66ef8770d9c0f1cd74fed20f06c344f6c76d241925d8d1.scope - libcontainer container c1633774369cfa6f7e66ef8770d9c0f1cd74fed20f06c344f6c76d241925d8d1. Sep 4 00:06:03.462242 systemd[1]: Started cri-containerd-18f5bc37891817e196dde4b9304fd4665fcaf676ba3dd1b480fc471f8f96d7b9.scope - libcontainer container 18f5bc37891817e196dde4b9304fd4665fcaf676ba3dd1b480fc471f8f96d7b9. Sep 4 00:06:03.490749 systemd[1]: Started cri-containerd-c81e35f3b4568dc3bd76e8cd06954565547d854c57e72df7677b88a15549553a.scope - libcontainer container c81e35f3b4568dc3bd76e8cd06954565547d854c57e72df7677b88a15549553a. 
Sep 4 00:06:03.525390 kubelet[2420]: W0904 00:06:03.525258 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Sep 4 00:06:03.526536 kubelet[2420]: E0904 00:06:03.525411 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:03.610384 containerd[1554]: time="2025-09-04T00:06:03.608961219Z" level=info msg="StartContainer for \"c1633774369cfa6f7e66ef8770d9c0f1cd74fed20f06c344f6c76d241925d8d1\" returns successfully" Sep 4 00:06:03.650535 kubelet[2420]: I0904 00:06:03.650056 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:03.651606 kubelet[2420]: E0904 00:06:03.651549 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:03.676257 containerd[1554]: time="2025-09-04T00:06:03.676137887Z" level=info msg="StartContainer for \"c81e35f3b4568dc3bd76e8cd06954565547d854c57e72df7677b88a15549553a\" returns successfully" Sep 4 00:06:03.742631 containerd[1554]: time="2025-09-04T00:06:03.742500023Z" level=info msg="StartContainer for \"18f5bc37891817e196dde4b9304fd4665fcaf676ba3dd1b480fc471f8f96d7b9\" returns successfully" Sep 4 00:06:04.070221 kubelet[2420]: E0904 00:06:04.069962 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:04.072734 kubelet[2420]: E0904 00:06:04.071157 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:04.072922 kubelet[2420]: E0904 00:06:04.071372 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:05.072812 kubelet[2420]: E0904 00:06:05.072752 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:05.074237 kubelet[2420]: E0904 00:06:05.073807 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:05.263944 kubelet[2420]: I0904 00:06:05.263813 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.075237 kubelet[2420]: E0904 00:06:06.075176 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.693536 kubelet[2420]: E0904 00:06:06.693460 2420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.778485 kubelet[2420]: I0904 00:06:06.777654 2420 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.778485 kubelet[2420]: I0904 00:06:06.778199 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.798493 kubelet[2420]: E0904 00:06:06.798410 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.801091 kubelet[2420]: I0904 00:06:06.800842 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.809361 kubelet[2420]: E0904 00:06:06.809306 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.809361 kubelet[2420]: I0904 00:06:06.809358 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.814490 kubelet[2420]: E0904 00:06:06.813233 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:06.943352 kubelet[2420]: I0904 00:06:06.943273 2420 apiserver.go:52] "Watching apiserver" Sep 4 00:06:06.979589 kubelet[2420]: I0904 00:06:06.979363 2420 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 00:06:07.728494 update_engine[1523]: I20250904 00:06:07.727790 1523 update_attempter.cc:509] Updating boot flags... Sep 4 00:06:09.664106 systemd[1]: Reload requested from client PID 2710 ('systemctl') (unit session-9.scope)... Sep 4 00:06:09.664148 systemd[1]: Reloading... Sep 4 00:06:09.972502 zram_generator::config[2760]: No configuration found. Sep 4 00:06:10.126389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:06:10.378295 systemd[1]: Reloading finished in 713 ms. Sep 4 00:06:10.430971 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:10.453260 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 00:06:10.453721 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:10.453829 systemd[1]: kubelet.service: Consumed 1.607s CPU time, 131M memory peak. Sep 4 00:06:10.460796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:10.831202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:10.849250 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:06:10.962527 kubelet[2802]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 00:06:10.962527 kubelet[2802]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 00:06:10.962527 kubelet[2802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:06:10.962527 kubelet[2802]: I0904 00:06:10.961920 2802 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 00:06:10.983017 kubelet[2802]: I0904 00:06:10.982931 2802 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 00:06:10.983017 kubelet[2802]: I0904 00:06:10.982998 2802 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 00:06:10.983728 kubelet[2802]: I0904 00:06:10.983671 2802 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 00:06:10.985949 kubelet[2802]: I0904 00:06:10.985900 2802 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 00:06:10.992700 kubelet[2802]: I0904 00:06:10.992423 2802 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 00:06:11.025220 kubelet[2802]: I0904 00:06:11.024303 2802 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 00:06:11.030855 kubelet[2802]: I0904 00:06:11.030756 2802 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 00:06:11.032461 kubelet[2802]: I0904 00:06:11.031862 2802 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 00:06:11.032461 kubelet[2802]: I0904 00:06:11.031932 2802 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 00:06:11.032461 kubelet[2802]: I0904 00:06:11.032318 2802 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 00:06:11.032461 kubelet[2802]: I0904 00:06:11.032340 2802 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 00:06:11.032929 kubelet[2802]: I0904 00:06:11.032873 2802 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:06:11.034773 kubelet[2802]: I0904 00:06:11.033235 2802 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 00:06:11.034773 kubelet[2802]: I0904 00:06:11.033278 2802 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 00:06:11.034773 kubelet[2802]: I0904 00:06:11.033327 2802 kubelet.go:352] "Adding apiserver pod source"
Sep 4 00:06:11.034773 kubelet[2802]: I0904 00:06:11.033351 2802 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 00:06:11.038547 kubelet[2802]: I0904 00:06:11.038500 2802 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 4 00:06:11.039351 kubelet[2802]: I0904 00:06:11.039287 2802 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 00:06:11.042868 kubelet[2802]: I0904 00:06:11.042136 2802 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 00:06:11.042868 kubelet[2802]: I0904 00:06:11.042193 2802 server.go:1287] "Started kubelet"
Sep 4 00:06:11.051506 kubelet[2802]: I0904 00:06:11.050022 2802 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 00:06:11.065627 kubelet[2802]: I0904 00:06:11.065540 2802 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 00:06:11.071337 kubelet[2802]: I0904 00:06:11.069536 2802 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 00:06:11.077182 kubelet[2802]: I0904 00:06:11.075369 2802 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 00:06:11.077182 kubelet[2802]: I0904 00:06:11.076620 2802 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 00:06:11.077182 kubelet[2802]: I0904 00:06:11.076964 2802 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 00:06:11.085141 kubelet[2802]: I0904 00:06:11.084964 2802 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 00:06:11.088386 kubelet[2802]: E0904 00:06:11.087390 2802 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" not found"
Sep 4 00:06:11.092345 kubelet[2802]: I0904 00:06:11.092300 2802 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 00:06:11.096528 kubelet[2802]: I0904 00:06:11.093696 2802 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 00:06:11.113682 kubelet[2802]: I0904 00:06:11.113614 2802 factory.go:221] Registration of the systemd container factory successfully
Sep 4 00:06:11.113926 kubelet[2802]: I0904 00:06:11.113804 2802 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 00:06:11.138631 kubelet[2802]: I0904 00:06:11.138586 2802 factory.go:221] Registration of the containerd container factory successfully
Sep 4 00:06:11.143858 kubelet[2802]: E0904 00:06:11.143037 2802 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 00:06:11.145462 kubelet[2802]: I0904 00:06:11.144614 2802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 00:06:11.163600 kubelet[2802]: I0904 00:06:11.163518 2802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 00:06:11.164209 kubelet[2802]: I0904 00:06:11.163957 2802 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 00:06:11.164538 kubelet[2802]: I0904 00:06:11.164185 2802 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 00:06:11.164538 kubelet[2802]: I0904 00:06:11.164483 2802 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 00:06:11.165122 kubelet[2802]: E0904 00:06:11.164986 2802 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 00:06:11.265615 kubelet[2802]: E0904 00:06:11.265547 2802 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 00:06:11.277483 kubelet[2802]: I0904 00:06:11.276054 2802 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 00:06:11.277483 kubelet[2802]: I0904 00:06:11.276088 2802 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 00:06:11.277483 kubelet[2802]: I0904 00:06:11.276127 2802 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:06:11.277483 kubelet[2802]: I0904 00:06:11.276740 2802 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 00:06:11.277483 kubelet[2802]: I0904 00:06:11.276765 2802 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 00:06:11.277483 kubelet[2802]: I0904 00:06:11.276795 2802 policy_none.go:49] "None policy: Start"
Sep 4 00:06:11.277483 kubelet[2802]: I0904 00:06:11.276815 2802 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 00:06:11.277483 kubelet[2802]: I0904 00:06:11.276849 2802 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 00:06:11.277483 kubelet[2802]: I0904 00:06:11.277090 2802 state_mem.go:75] "Updated machine memory state"
Sep 4 00:06:11.290060 kubelet[2802]: I0904 00:06:11.290013 2802 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 00:06:11.292211 kubelet[2802]: I0904 00:06:11.292175 2802 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 00:06:11.292656 kubelet[2802]: I0904 00:06:11.292207 2802 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 00:06:11.293850 kubelet[2802]: I0904 00:06:11.293570 2802 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 00:06:11.303124 kubelet[2802]: E0904 00:06:11.302744 2802 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 00:06:11.427572 kubelet[2802]: I0904 00:06:11.427356 2802 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.450917 kubelet[2802]: I0904 00:06:11.450840 2802 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.451170 kubelet[2802]: I0904 00:06:11.451017 2802 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.467969 kubelet[2802]: I0904 00:06:11.466980 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.468536 kubelet[2802]: I0904 00:06:11.468232 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.469871 kubelet[2802]: I0904 00:06:11.469839 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.487418 kubelet[2802]: W0904 00:06:11.486978 2802 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Sep 4 00:06:11.492016 kubelet[2802]: W0904 00:06:11.491973 2802 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Sep 4 00:06:11.497117 kubelet[2802]: W0904 00:06:11.496546 2802 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Sep 4 00:06:11.497117 kubelet[2802]: I0904 00:06:11.496636 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.497117 kubelet[2802]: I0904 00:06:11.496688 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.497117 kubelet[2802]: I0904 00:06:11.496730 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90508dd84af6c483a8ea9a96425e4ca9-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"90508dd84af6c483a8ea9a96425e4ca9\") " pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.497117 kubelet[2802]: I0904 00:06:11.496782 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce85c2765ba4ec9afe67803f428009b2-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"ce85c2765ba4ec9afe67803f428009b2\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.497555 kubelet[2802]: I0904 00:06:11.496847 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce85c2765ba4ec9afe67803f428009b2-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"ce85c2765ba4ec9afe67803f428009b2\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.497555 kubelet[2802]: I0904 00:06:11.496886 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce85c2765ba4ec9afe67803f428009b2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"ce85c2765ba4ec9afe67803f428009b2\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.497555 kubelet[2802]: I0904 00:06:11.496926 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.497555 kubelet[2802]: I0904 00:06:11.496974 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-kubeconfig\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:11.497787 kubelet[2802]: I0904 00:06:11.497006 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efc7731598c79bfd3cdb007aebb35a07-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" (UID: \"efc7731598c79bfd3cdb007aebb35a07\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:06:12.055927 kubelet[2802]: I0904 00:06:12.055679 2802 apiserver.go:52] "Watching apiserver"
Sep 4 00:06:12.093238 kubelet[2802]: I0904 00:06:12.093155 2802 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 00:06:12.282171 kubelet[2802]: I0904 00:06:12.282069 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" podStartSLOduration=1.282040934 podStartE2EDuration="1.282040934s" podCreationTimestamp="2025-09-04 00:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:12.279091448 +0000 UTC m=+1.419781708" watchObservedRunningTime="2025-09-04 00:06:12.282040934 +0000 UTC m=+1.422731192"
Sep 4 00:06:12.334597 kubelet[2802]: I0904 00:06:12.334281 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" podStartSLOduration=1.334243608 podStartE2EDuration="1.334243608s" podCreationTimestamp="2025-09-04 00:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:12.300873369 +0000 UTC m=+1.441563631" watchObservedRunningTime="2025-09-04 00:06:12.334243608 +0000 UTC m=+1.474933869"
Sep 4 00:06:12.371652 kubelet[2802]: I0904 00:06:12.370792 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" podStartSLOduration=1.370754039 podStartE2EDuration="1.370754039s" podCreationTimestamp="2025-09-04 00:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:12.339511847 +0000 UTC m=+1.480202105" watchObservedRunningTime="2025-09-04 00:06:12.370754039 +0000 UTC m=+1.511444309"
Sep 4 00:06:13.995374 kubelet[2802]: I0904 00:06:13.995155 2802 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 00:06:13.997493 containerd[1554]: time="2025-09-04T00:06:13.996569981Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 00:06:13.998021 kubelet[2802]: I0904 00:06:13.997076 2802 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 00:06:14.643353 kubelet[2802]: I0904 00:06:14.642952 2802 status_manager.go:890] "Failed to get status for pod" podUID="f3578a63-79f6-4cb3-96a9-e1deb756a5b5" pod="kube-system/kube-proxy-x77th" err="pods \"kube-proxy-x77th\" is forbidden: User \"system:node:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' and this object"
Sep 4 00:06:14.643353 kubelet[2802]: W0904 00:06:14.642992 2802 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' and this object
Sep 4 00:06:14.646483 kubelet[2802]: E0904 00:06:14.643062 2802 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' and this object" logger="UnhandledError"
Sep 4 00:06:14.660883 systemd[1]: Created slice kubepods-besteffort-podf3578a63_79f6_4cb3_96a9_e1deb756a5b5.slice - libcontainer container kubepods-besteffort-podf3578a63_79f6_4cb3_96a9_e1deb756a5b5.slice.
Sep 4 00:06:14.719596 kubelet[2802]: I0904 00:06:14.719511 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3578a63-79f6-4cb3-96a9-e1deb756a5b5-kube-proxy\") pod \"kube-proxy-x77th\" (UID: \"f3578a63-79f6-4cb3-96a9-e1deb756a5b5\") " pod="kube-system/kube-proxy-x77th"
Sep 4 00:06:14.719596 kubelet[2802]: I0904 00:06:14.719596 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3578a63-79f6-4cb3-96a9-e1deb756a5b5-xtables-lock\") pod \"kube-proxy-x77th\" (UID: \"f3578a63-79f6-4cb3-96a9-e1deb756a5b5\") " pod="kube-system/kube-proxy-x77th"
Sep 4 00:06:14.719974 kubelet[2802]: I0904 00:06:14.719632 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3578a63-79f6-4cb3-96a9-e1deb756a5b5-lib-modules\") pod \"kube-proxy-x77th\" (UID: \"f3578a63-79f6-4cb3-96a9-e1deb756a5b5\") " pod="kube-system/kube-proxy-x77th"
Sep 4 00:06:14.719974 kubelet[2802]: I0904 00:06:14.719673 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvs4w\" (UniqueName: \"kubernetes.io/projected/f3578a63-79f6-4cb3-96a9-e1deb756a5b5-kube-api-access-jvs4w\") pod \"kube-proxy-x77th\" (UID: \"f3578a63-79f6-4cb3-96a9-e1deb756a5b5\") " pod="kube-system/kube-proxy-x77th"
Sep 4 00:06:15.144212 systemd[1]: Created slice kubepods-besteffort-pod37bea06a_71b8_42ab_a7aa_0f3ca1dee906.slice - libcontainer container kubepods-besteffort-pod37bea06a_71b8_42ab_a7aa_0f3ca1dee906.slice.
Sep 4 00:06:15.222539 kubelet[2802]: I0904 00:06:15.222473 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvwhn\" (UniqueName: \"kubernetes.io/projected/37bea06a-71b8-42ab-a7aa-0f3ca1dee906-kube-api-access-dvwhn\") pod \"tigera-operator-755d956888-kz5sd\" (UID: \"37bea06a-71b8-42ab-a7aa-0f3ca1dee906\") " pod="tigera-operator/tigera-operator-755d956888-kz5sd"
Sep 4 00:06:15.222539 kubelet[2802]: I0904 00:06:15.222551 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/37bea06a-71b8-42ab-a7aa-0f3ca1dee906-var-lib-calico\") pod \"tigera-operator-755d956888-kz5sd\" (UID: \"37bea06a-71b8-42ab-a7aa-0f3ca1dee906\") " pod="tigera-operator/tigera-operator-755d956888-kz5sd"
Sep 4 00:06:15.456340 containerd[1554]: time="2025-09-04T00:06:15.455804215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-kz5sd,Uid:37bea06a-71b8-42ab-a7aa-0f3ca1dee906,Namespace:tigera-operator,Attempt:0,}"
Sep 4 00:06:15.494088 containerd[1554]: time="2025-09-04T00:06:15.493998393Z" level=info msg="connecting to shim b774ea96fe60ae53a548c18972b3b253c3289a348dbfce74adfd1700acf393f0" address="unix:///run/containerd/s/914c1767d114065ed12f907efb0c2c389badc1dd17ed2b1e28a040be6ff228ab" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:06:15.548799 systemd[1]: Started cri-containerd-b774ea96fe60ae53a548c18972b3b253c3289a348dbfce74adfd1700acf393f0.scope - libcontainer container b774ea96fe60ae53a548c18972b3b253c3289a348dbfce74adfd1700acf393f0.
Sep 4 00:06:15.622702 containerd[1554]: time="2025-09-04T00:06:15.622634313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-kz5sd,Uid:37bea06a-71b8-42ab-a7aa-0f3ca1dee906,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b774ea96fe60ae53a548c18972b3b253c3289a348dbfce74adfd1700acf393f0\""
Sep 4 00:06:15.625585 containerd[1554]: time="2025-09-04T00:06:15.625541339Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 4 00:06:15.880914 containerd[1554]: time="2025-09-04T00:06:15.880848676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x77th,Uid:f3578a63-79f6-4cb3-96a9-e1deb756a5b5,Namespace:kube-system,Attempt:0,}"
Sep 4 00:06:15.923879 containerd[1554]: time="2025-09-04T00:06:15.923516835Z" level=info msg="connecting to shim 49e830c77e575804bbdb3d916f86d3cd0c899b8f2f1ff509ab046bbfe7fcc1f2" address="unix:///run/containerd/s/d40c8296b314d568a006f522d75c7520bb26851a4197f9f871cfcea37a8f26d2" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:06:15.965907 systemd[1]: Started cri-containerd-49e830c77e575804bbdb3d916f86d3cd0c899b8f2f1ff509ab046bbfe7fcc1f2.scope - libcontainer container 49e830c77e575804bbdb3d916f86d3cd0c899b8f2f1ff509ab046bbfe7fcc1f2.
Sep 4 00:06:16.029638 containerd[1554]: time="2025-09-04T00:06:16.029117614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x77th,Uid:f3578a63-79f6-4cb3-96a9-e1deb756a5b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"49e830c77e575804bbdb3d916f86d3cd0c899b8f2f1ff509ab046bbfe7fcc1f2\""
Sep 4 00:06:16.037854 containerd[1554]: time="2025-09-04T00:06:16.037798236Z" level=info msg="CreateContainer within sandbox \"49e830c77e575804bbdb3d916f86d3cd0c899b8f2f1ff509ab046bbfe7fcc1f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 00:06:16.061476 containerd[1554]: time="2025-09-04T00:06:16.059016530Z" level=info msg="Container 84433153126ddf885cd50ca705cc242c2fa87dc81aa7019a0e72e1c3abf7a4fb: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:06:16.069358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263245054.mount: Deactivated successfully.
Sep 4 00:06:16.081844 containerd[1554]: time="2025-09-04T00:06:16.081783123Z" level=info msg="CreateContainer within sandbox \"49e830c77e575804bbdb3d916f86d3cd0c899b8f2f1ff509ab046bbfe7fcc1f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84433153126ddf885cd50ca705cc242c2fa87dc81aa7019a0e72e1c3abf7a4fb\""
Sep 4 00:06:16.083123 containerd[1554]: time="2025-09-04T00:06:16.082940831Z" level=info msg="StartContainer for \"84433153126ddf885cd50ca705cc242c2fa87dc81aa7019a0e72e1c3abf7a4fb\""
Sep 4 00:06:16.086977 containerd[1554]: time="2025-09-04T00:06:16.086918267Z" level=info msg="connecting to shim 84433153126ddf885cd50ca705cc242c2fa87dc81aa7019a0e72e1c3abf7a4fb" address="unix:///run/containerd/s/d40c8296b314d568a006f522d75c7520bb26851a4197f9f871cfcea37a8f26d2" protocol=ttrpc version=3
Sep 4 00:06:16.128801 systemd[1]: Started cri-containerd-84433153126ddf885cd50ca705cc242c2fa87dc81aa7019a0e72e1c3abf7a4fb.scope - libcontainer container 84433153126ddf885cd50ca705cc242c2fa87dc81aa7019a0e72e1c3abf7a4fb.
Sep 4 00:06:16.203637 containerd[1554]: time="2025-09-04T00:06:16.203079726Z" level=info msg="StartContainer for \"84433153126ddf885cd50ca705cc242c2fa87dc81aa7019a0e72e1c3abf7a4fb\" returns successfully"
Sep 4 00:06:16.262919 kubelet[2802]: I0904 00:06:16.262840 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x77th" podStartSLOduration=2.262801891 podStartE2EDuration="2.262801891s" podCreationTimestamp="2025-09-04 00:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:16.26111785 +0000 UTC m=+5.401808113" watchObservedRunningTime="2025-09-04 00:06:16.262801891 +0000 UTC m=+5.403492150"
Sep 4 00:06:18.166462 containerd[1554]: time="2025-09-04T00:06:18.166332733Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:18.168595 containerd[1554]: time="2025-09-04T00:06:18.168276484Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 4 00:06:18.170260 containerd[1554]: time="2025-09-04T00:06:18.170145542Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:18.175860 containerd[1554]: time="2025-09-04T00:06:18.175799170Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:18.177218 containerd[1554]: time="2025-09-04T00:06:18.177147493Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.551553802s"
Sep 4 00:06:18.177423 containerd[1554]: time="2025-09-04T00:06:18.177392905Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 4 00:06:18.181565 containerd[1554]: time="2025-09-04T00:06:18.181515009Z" level=info msg="CreateContainer within sandbox \"b774ea96fe60ae53a548c18972b3b253c3289a348dbfce74adfd1700acf393f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 00:06:18.193804 containerd[1554]: time="2025-09-04T00:06:18.193740765Z" level=info msg="Container 2f4ee5e0493b4fd343a075003c71bc40b9327b2ef5b73b28f6b5abdffc6adb79: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:06:18.211112 containerd[1554]: time="2025-09-04T00:06:18.211033260Z" level=info msg="CreateContainer within sandbox \"b774ea96fe60ae53a548c18972b3b253c3289a348dbfce74adfd1700acf393f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2f4ee5e0493b4fd343a075003c71bc40b9327b2ef5b73b28f6b5abdffc6adb79\""
Sep 4 00:06:18.212920 containerd[1554]: time="2025-09-04T00:06:18.212416664Z" level=info msg="StartContainer for \"2f4ee5e0493b4fd343a075003c71bc40b9327b2ef5b73b28f6b5abdffc6adb79\""
Sep 4 00:06:18.218647 containerd[1554]: time="2025-09-04T00:06:18.218567781Z" level=info msg="connecting to shim 2f4ee5e0493b4fd343a075003c71bc40b9327b2ef5b73b28f6b5abdffc6adb79" address="unix:///run/containerd/s/914c1767d114065ed12f907efb0c2c389badc1dd17ed2b1e28a040be6ff228ab" protocol=ttrpc version=3
Sep 4 00:06:18.275821 systemd[1]: Started cri-containerd-2f4ee5e0493b4fd343a075003c71bc40b9327b2ef5b73b28f6b5abdffc6adb79.scope - libcontainer container 2f4ee5e0493b4fd343a075003c71bc40b9327b2ef5b73b28f6b5abdffc6adb79.
Sep 4 00:06:18.338972 containerd[1554]: time="2025-09-04T00:06:18.338879130Z" level=info msg="StartContainer for \"2f4ee5e0493b4fd343a075003c71bc40b9327b2ef5b73b28f6b5abdffc6adb79\" returns successfully"
Sep 4 00:06:19.391461 kubelet[2802]: I0904 00:06:19.391091 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-kz5sd" podStartSLOduration=1.836621086 podStartE2EDuration="4.391058198s" podCreationTimestamp="2025-09-04 00:06:15 +0000 UTC" firstStartedPulling="2025-09-04 00:06:15.624558411 +0000 UTC m=+4.765248655" lastFinishedPulling="2025-09-04 00:06:18.178995514 +0000 UTC m=+7.319685767" observedRunningTime="2025-09-04 00:06:19.290817374 +0000 UTC m=+8.431507634" watchObservedRunningTime="2025-09-04 00:06:19.391058198 +0000 UTC m=+8.531748458"
Sep 4 00:06:24.746876 sudo[1862]: pam_unix(sudo:session): session closed for user root
Sep 4 00:06:24.795552 sshd[1861]: Connection closed by 147.75.109.163 port 38722
Sep 4 00:06:24.797119 sshd-session[1859]: pam_unix(sshd:session): session closed for user core
Sep 4 00:06:24.808809 systemd[1]: sshd@8-10.128.0.26:22-147.75.109.163:38722.service: Deactivated successfully.
Sep 4 00:06:24.816168 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 00:06:24.817680 systemd[1]: session-9.scope: Consumed 7.573s CPU time, 229.6M memory peak.
Sep 4 00:06:24.824714 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit.
Sep 4 00:06:24.830179 systemd-logind[1522]: Removed session 9.
Sep 4 00:06:32.146645 kubelet[2802]: I0904 00:06:32.143137 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjkk8\" (UniqueName: \"kubernetes.io/projected/66a04622-464c-49eb-9b3c-c413e6ad2a2f-kube-api-access-bjkk8\") pod \"calico-typha-67455f697c-4jkr2\" (UID: \"66a04622-464c-49eb-9b3c-c413e6ad2a2f\") " pod="calico-system/calico-typha-67455f697c-4jkr2"
Sep 4 00:06:32.146276 systemd[1]: Created slice kubepods-besteffort-pod66a04622_464c_49eb_9b3c_c413e6ad2a2f.slice - libcontainer container kubepods-besteffort-pod66a04622_464c_49eb_9b3c_c413e6ad2a2f.slice.
Sep 4 00:06:32.151477 kubelet[2802]: I0904 00:06:32.150616 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/66a04622-464c-49eb-9b3c-c413e6ad2a2f-typha-certs\") pod \"calico-typha-67455f697c-4jkr2\" (UID: \"66a04622-464c-49eb-9b3c-c413e6ad2a2f\") " pod="calico-system/calico-typha-67455f697c-4jkr2"
Sep 4 00:06:32.151477 kubelet[2802]: I0904 00:06:32.150697 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66a04622-464c-49eb-9b3c-c413e6ad2a2f-tigera-ca-bundle\") pod \"calico-typha-67455f697c-4jkr2\" (UID: \"66a04622-464c-49eb-9b3c-c413e6ad2a2f\") " pod="calico-system/calico-typha-67455f697c-4jkr2"
Sep 4 00:06:32.439089 systemd[1]: Created slice kubepods-besteffort-pod56ae520a_e594_4d29_8a69_860763ceaf34.slice - libcontainer container kubepods-besteffort-pod56ae520a_e594_4d29_8a69_860763ceaf34.slice.
Sep 4 00:06:32.454087 kubelet[2802]: I0904 00:06:32.453805 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/56ae520a-e594-4d29-8a69-860763ceaf34-policysync\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.454654 kubelet[2802]: I0904 00:06:32.454519 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56ae520a-e594-4d29-8a69-860763ceaf34-tigera-ca-bundle\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.455071 kubelet[2802]: I0904 00:06:32.455026 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8558\" (UniqueName: \"kubernetes.io/projected/56ae520a-e594-4d29-8a69-860763ceaf34-kube-api-access-c8558\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.455528 kubelet[2802]: I0904 00:06:32.455419 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56ae520a-e594-4d29-8a69-860763ceaf34-var-lib-calico\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.455899 kubelet[2802]: I0904 00:06:32.455873 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56ae520a-e594-4d29-8a69-860763ceaf34-cni-log-dir\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.456195 kubelet[2802]: I0904 00:06:32.456170 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56ae520a-e594-4d29-8a69-860763ceaf34-xtables-lock\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.456545 kubelet[2802]: I0904 00:06:32.456495 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56ae520a-e594-4d29-8a69-860763ceaf34-lib-modules\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.457287 kubelet[2802]: I0904 00:06:32.456774 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56ae520a-e594-4d29-8a69-860763ceaf34-node-certs\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.457287 kubelet[2802]: I0904 00:06:32.456817 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56ae520a-e594-4d29-8a69-860763ceaf34-flexvol-driver-host\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.457287 kubelet[2802]: I0904 00:06:32.456857 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56ae520a-e594-4d29-8a69-860763ceaf34-cni-net-dir\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.457287 kubelet[2802]: I0904 00:06:32.456922 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56ae520a-e594-4d29-8a69-860763ceaf34-cni-bin-dir\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.457287 kubelet[2802]: I0904 00:06:32.456995 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56ae520a-e594-4d29-8a69-860763ceaf34-var-run-calico\") pod \"calico-node-2cjq7\" (UID: \"56ae520a-e594-4d29-8a69-860763ceaf34\") " pod="calico-system/calico-node-2cjq7"
Sep 4 00:06:32.459615 containerd[1554]: time="2025-09-04T00:06:32.459299448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67455f697c-4jkr2,Uid:66a04622-464c-49eb-9b3c-c413e6ad2a2f,Namespace:calico-system,Attempt:0,}"
Sep 4 00:06:32.525748 containerd[1554]: time="2025-09-04T00:06:32.525649466Z" level=info msg="connecting to shim 479f4c2135748634a5ccbfda0af140aad803478c3150fdb30377e6e95396f047" address="unix:///run/containerd/s/69c28a34df909f847f1fc5c336e6ef7056663259725a43d525cacd6915c7d19e" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:06:32.572233 kubelet[2802]: E0904 00:06:32.571763 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 00:06:32.572233 kubelet[2802]: W0904 00:06:32.571798 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 00:06:32.572233 kubelet[2802]: E0904 00:06:32.571868 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 4 00:06:32.573723 kubelet[2802]: E0904 00:06:32.573527 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.573723 kubelet[2802]: W0904 00:06:32.573585 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.574114 kubelet[2802]: E0904 00:06:32.573614 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.603937 kubelet[2802]: E0904 00:06:32.603831 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.603937 kubelet[2802]: W0904 00:06:32.603874 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.603937 kubelet[2802]: E0904 00:06:32.603919 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.621299 systemd[1]: Started cri-containerd-479f4c2135748634a5ccbfda0af140aad803478c3150fdb30377e6e95396f047.scope - libcontainer container 479f4c2135748634a5ccbfda0af140aad803478c3150fdb30377e6e95396f047. 
Sep 4 00:06:32.634878 kubelet[2802]: E0904 00:06:32.634824 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.634878 kubelet[2802]: W0904 00:06:32.634871 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.635232 kubelet[2802]: E0904 00:06:32.634915 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.696684 kubelet[2802]: E0904 00:06:32.696409 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-879lm" podUID="b4c73824-aa33-4959-8741-865f24cc1aca" Sep 4 00:06:32.743700 kubelet[2802]: E0904 00:06:32.743637 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.744455 kubelet[2802]: W0904 00:06:32.744083 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.744455 kubelet[2802]: E0904 00:06:32.744162 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.745677 kubelet[2802]: E0904 00:06:32.745642 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.745907 kubelet[2802]: W0904 00:06:32.745772 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.746354 kubelet[2802]: E0904 00:06:32.745807 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.747016 kubelet[2802]: E0904 00:06:32.746950 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.747340 kubelet[2802]: W0904 00:06:32.747162 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.747340 kubelet[2802]: E0904 00:06:32.747193 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.749072 kubelet[2802]: E0904 00:06:32.748094 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.749246 kubelet[2802]: W0904 00:06:32.749219 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.749352 kubelet[2802]: E0904 00:06:32.749335 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.749958 kubelet[2802]: E0904 00:06:32.749833 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.749958 kubelet[2802]: W0904 00:06:32.749852 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.749958 kubelet[2802]: E0904 00:06:32.749890 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.750467 kubelet[2802]: E0904 00:06:32.750414 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.750694 kubelet[2802]: W0904 00:06:32.750562 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.750694 kubelet[2802]: E0904 00:06:32.750586 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.751705 kubelet[2802]: E0904 00:06:32.751640 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.752119 kubelet[2802]: W0904 00:06:32.751983 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.752119 kubelet[2802]: E0904 00:06:32.752030 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.752730 kubelet[2802]: E0904 00:06:32.752709 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.753209 kubelet[2802]: W0904 00:06:32.752854 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.753209 kubelet[2802]: E0904 00:06:32.752878 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.754657 kubelet[2802]: E0904 00:06:32.754516 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.754657 kubelet[2802]: W0904 00:06:32.754536 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.754995 kubelet[2802]: E0904 00:06:32.754836 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.755591 kubelet[2802]: E0904 00:06:32.755513 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.755591 kubelet[2802]: W0904 00:06:32.755532 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.755997 kubelet[2802]: E0904 00:06:32.755845 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.756226 kubelet[2802]: E0904 00:06:32.756201 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.756354 kubelet[2802]: W0904 00:06:32.756225 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.756354 kubelet[2802]: E0904 00:06:32.756243 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.756678 kubelet[2802]: E0904 00:06:32.756648 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.756678 kubelet[2802]: W0904 00:06:32.756680 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.756678 kubelet[2802]: E0904 00:06:32.756698 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.758568 kubelet[2802]: E0904 00:06:32.758537 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.758677 kubelet[2802]: W0904 00:06:32.758569 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.758677 kubelet[2802]: E0904 00:06:32.758590 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.759474 kubelet[2802]: E0904 00:06:32.758904 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.759474 kubelet[2802]: W0904 00:06:32.758933 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.759474 kubelet[2802]: E0904 00:06:32.758956 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.760805 kubelet[2802]: E0904 00:06:32.760780 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.760805 kubelet[2802]: W0904 00:06:32.760805 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.760805 kubelet[2802]: E0904 00:06:32.760824 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.761163 kubelet[2802]: E0904 00:06:32.761141 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.761163 kubelet[2802]: W0904 00:06:32.761163 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.761163 kubelet[2802]: E0904 00:06:32.761181 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.761829 kubelet[2802]: E0904 00:06:32.761798 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.762074 kubelet[2802]: W0904 00:06:32.761919 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.762074 kubelet[2802]: E0904 00:06:32.761946 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.762521 kubelet[2802]: E0904 00:06:32.762502 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.762726 kubelet[2802]: W0904 00:06:32.762648 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.762726 kubelet[2802]: E0904 00:06:32.762675 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.763352 kubelet[2802]: E0904 00:06:32.763334 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.763559 kubelet[2802]: W0904 00:06:32.763485 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.763869 kubelet[2802]: E0904 00:06:32.763672 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.764337 containerd[1554]: time="2025-09-04T00:06:32.764284931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2cjq7,Uid:56ae520a-e594-4d29-8a69-860763ceaf34,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:32.764690 kubelet[2802]: E0904 00:06:32.764614 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.764690 kubelet[2802]: W0904 00:06:32.764638 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.765154 kubelet[2802]: E0904 00:06:32.764657 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.767863 kubelet[2802]: E0904 00:06:32.767829 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.767863 kubelet[2802]: W0904 00:06:32.767862 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.768626 kubelet[2802]: E0904 00:06:32.767885 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.769179 kubelet[2802]: I0904 00:06:32.768337 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggfh\" (UniqueName: \"kubernetes.io/projected/b4c73824-aa33-4959-8741-865f24cc1aca-kube-api-access-qggfh\") pod \"csi-node-driver-879lm\" (UID: \"b4c73824-aa33-4959-8741-865f24cc1aca\") " pod="calico-system/csi-node-driver-879lm" Sep 4 00:06:32.770655 kubelet[2802]: E0904 00:06:32.770528 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.770655 kubelet[2802]: W0904 00:06:32.770628 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.770655 kubelet[2802]: E0904 00:06:32.770649 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.771393 kubelet[2802]: E0904 00:06:32.771297 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.771393 kubelet[2802]: W0904 00:06:32.771322 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.772671 kubelet[2802]: E0904 00:06:32.772636 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.773615 kubelet[2802]: E0904 00:06:32.773586 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.773615 kubelet[2802]: W0904 00:06:32.773615 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.773755 kubelet[2802]: E0904 00:06:32.773636 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.774868 kubelet[2802]: I0904 00:06:32.774525 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b4c73824-aa33-4959-8741-865f24cc1aca-varrun\") pod \"csi-node-driver-879lm\" (UID: \"b4c73824-aa33-4959-8741-865f24cc1aca\") " pod="calico-system/csi-node-driver-879lm" Sep 4 00:06:32.776694 kubelet[2802]: E0904 00:06:32.776655 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.776799 kubelet[2802]: W0904 00:06:32.776694 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.776799 kubelet[2802]: E0904 00:06:32.776747 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.778623 kubelet[2802]: E0904 00:06:32.778590 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.778752 kubelet[2802]: W0904 00:06:32.778621 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.779173 kubelet[2802]: E0904 00:06:32.778966 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.780028 kubelet[2802]: E0904 00:06:32.780001 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.780028 kubelet[2802]: W0904 00:06:32.780027 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.780180 kubelet[2802]: E0904 00:06:32.780049 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.781920 kubelet[2802]: I0904 00:06:32.781810 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4c73824-aa33-4959-8741-865f24cc1aca-kubelet-dir\") pod \"csi-node-driver-879lm\" (UID: \"b4c73824-aa33-4959-8741-865f24cc1aca\") " pod="calico-system/csi-node-driver-879lm" Sep 4 00:06:32.782045 kubelet[2802]: E0904 00:06:32.781964 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.782045 kubelet[2802]: W0904 00:06:32.781985 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.782045 kubelet[2802]: E0904 00:06:32.782012 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.783738 kubelet[2802]: E0904 00:06:32.783537 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.783738 kubelet[2802]: W0904 00:06:32.783560 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.783738 kubelet[2802]: E0904 00:06:32.783587 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.784484 kubelet[2802]: E0904 00:06:32.784227 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.784484 kubelet[2802]: W0904 00:06:32.784245 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.784484 kubelet[2802]: E0904 00:06:32.784262 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.784484 kubelet[2802]: I0904 00:06:32.784296 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4c73824-aa33-4959-8741-865f24cc1aca-registration-dir\") pod \"csi-node-driver-879lm\" (UID: \"b4c73824-aa33-4959-8741-865f24cc1aca\") " pod="calico-system/csi-node-driver-879lm" Sep 4 00:06:32.785019 kubelet[2802]: E0904 00:06:32.784999 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.785350 kubelet[2802]: W0904 00:06:32.785110 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.785350 kubelet[2802]: E0904 00:06:32.785135 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.785350 kubelet[2802]: I0904 00:06:32.785165 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4c73824-aa33-4959-8741-865f24cc1aca-socket-dir\") pod \"csi-node-driver-879lm\" (UID: \"b4c73824-aa33-4959-8741-865f24cc1aca\") " pod="calico-system/csi-node-driver-879lm" Sep 4 00:06:32.786101 kubelet[2802]: E0904 00:06:32.786079 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.786506 kubelet[2802]: W0904 00:06:32.786482 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.787116 kubelet[2802]: E0904 00:06:32.786614 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.787116 kubelet[2802]: E0904 00:06:32.787011 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.787116 kubelet[2802]: W0904 00:06:32.787025 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.788187 kubelet[2802]: E0904 00:06:32.788091 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.789849 kubelet[2802]: E0904 00:06:32.789679 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.789849 kubelet[2802]: W0904 00:06:32.789701 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.789849 kubelet[2802]: E0904 00:06:32.789722 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.791941 kubelet[2802]: E0904 00:06:32.791919 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.792547 kubelet[2802]: W0904 00:06:32.792136 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.792547 kubelet[2802]: E0904 00:06:32.792473 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.829136 containerd[1554]: time="2025-09-04T00:06:32.829041910Z" level=info msg="connecting to shim 26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c" address="unix:///run/containerd/s/3980a1736c4a62287dfdd0bf13cf50ce886f483d43b01506d2fbb3975dcaf009" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:32.886852 kubelet[2802]: E0904 00:06:32.886728 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.886852 kubelet[2802]: W0904 00:06:32.886766 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.886852 kubelet[2802]: E0904 00:06:32.886806 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.891053 kubelet[2802]: E0904 00:06:32.890806 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.891053 kubelet[2802]: W0904 00:06:32.890848 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.891053 kubelet[2802]: E0904 00:06:32.890917 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.893148 kubelet[2802]: E0904 00:06:32.892755 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.893148 kubelet[2802]: W0904 00:06:32.892787 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.893148 kubelet[2802]: E0904 00:06:32.892839 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.894682 kubelet[2802]: E0904 00:06:32.894650 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.895541 kubelet[2802]: W0904 00:06:32.895317 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.896260 kubelet[2802]: E0904 00:06:32.895418 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.897123 kubelet[2802]: E0904 00:06:32.897033 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.897123 kubelet[2802]: W0904 00:06:32.897077 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.899670 kubelet[2802]: E0904 00:06:32.899522 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.902422 kubelet[2802]: E0904 00:06:32.901604 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.902422 kubelet[2802]: W0904 00:06:32.901633 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.902422 kubelet[2802]: E0904 00:06:32.901674 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.909465 kubelet[2802]: E0904 00:06:32.909240 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.909465 kubelet[2802]: W0904 00:06:32.909282 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.909465 kubelet[2802]: E0904 00:06:32.909323 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.913702 kubelet[2802]: E0904 00:06:32.913623 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.913702 kubelet[2802]: W0904 00:06:32.913680 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.914042 kubelet[2802]: E0904 00:06:32.913741 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.916042 systemd[1]: Started cri-containerd-26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c.scope - libcontainer container 26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c. 
Sep 4 00:06:32.919663 kubelet[2802]: E0904 00:06:32.919615 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.919663 kubelet[2802]: W0904 00:06:32.919662 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.919921 kubelet[2802]: E0904 00:06:32.919732 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.922655 kubelet[2802]: E0904 00:06:32.922597 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.922655 kubelet[2802]: W0904 00:06:32.922648 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.922920 kubelet[2802]: E0904 00:06:32.922692 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.923858 kubelet[2802]: E0904 00:06:32.923831 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.924164 kubelet[2802]: W0904 00:06:32.924072 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.924367 kubelet[2802]: E0904 00:06:32.924112 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.927812 kubelet[2802]: E0904 00:06:32.927746 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.929068 kubelet[2802]: W0904 00:06:32.928275 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.929068 kubelet[2802]: E0904 00:06:32.928340 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.930115 kubelet[2802]: E0904 00:06:32.929858 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.930115 kubelet[2802]: W0904 00:06:32.929883 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.930115 kubelet[2802]: E0904 00:06:32.929911 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.930771 kubelet[2802]: E0904 00:06:32.930750 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.931214 kubelet[2802]: W0904 00:06:32.930982 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.931566 kubelet[2802]: E0904 00:06:32.931342 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.932400 kubelet[2802]: E0904 00:06:32.932267 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.932400 kubelet[2802]: W0904 00:06:32.932286 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.932400 kubelet[2802]: E0904 00:06:32.932364 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.933626 kubelet[2802]: E0904 00:06:32.933486 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.933626 kubelet[2802]: W0904 00:06:32.933509 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.934489 kubelet[2802]: E0904 00:06:32.934326 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.934489 kubelet[2802]: W0904 00:06:32.934356 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.935446 kubelet[2802]: E0904 00:06:32.935375 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.935446 kubelet[2802]: W0904 00:06:32.935404 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.935688 kubelet[2802]: E0904 00:06:32.935488 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.935688 kubelet[2802]: E0904 00:06:32.935541 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.935939 kubelet[2802]: E0904 00:06:32.935854 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.937904 kubelet[2802]: E0904 00:06:32.937796 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.937904 kubelet[2802]: W0904 00:06:32.937872 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.937904 kubelet[2802]: E0904 00:06:32.937909 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.940067 kubelet[2802]: E0904 00:06:32.940029 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.940067 kubelet[2802]: W0904 00:06:32.940068 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.940734 kubelet[2802]: E0904 00:06:32.940127 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.941627 kubelet[2802]: E0904 00:06:32.941588 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.941627 kubelet[2802]: W0904 00:06:32.941626 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.942036 kubelet[2802]: E0904 00:06:32.941823 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.942229 kubelet[2802]: E0904 00:06:32.942212 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.942326 kubelet[2802]: W0904 00:06:32.942310 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.942480 kubelet[2802]: E0904 00:06:32.942461 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.943264 kubelet[2802]: E0904 00:06:32.943217 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.943264 kubelet[2802]: W0904 00:06:32.943238 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.943664 kubelet[2802]: E0904 00:06:32.943598 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.944045 kubelet[2802]: E0904 00:06:32.944028 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.944166 kubelet[2802]: W0904 00:06:32.944127 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.944586 kubelet[2802]: E0904 00:06:32.944541 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:32.945817 kubelet[2802]: E0904 00:06:32.945756 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.946050 kubelet[2802]: W0904 00:06:32.945953 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.946050 kubelet[2802]: E0904 00:06:32.945998 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:32.997052 kubelet[2802]: E0904 00:06:32.996786 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:32.997052 kubelet[2802]: W0904 00:06:32.996827 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:32.997052 kubelet[2802]: E0904 00:06:32.996864 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:33.106260 containerd[1554]: time="2025-09-04T00:06:33.106032239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2cjq7,Uid:56ae520a-e594-4d29-8a69-860763ceaf34,Namespace:calico-system,Attempt:0,} returns sandbox id \"26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c\"" Sep 4 00:06:33.164258 containerd[1554]: time="2025-09-04T00:06:33.163783781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 4 00:06:33.215271 containerd[1554]: time="2025-09-04T00:06:33.215129994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67455f697c-4jkr2,Uid:66a04622-464c-49eb-9b3c-c413e6ad2a2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"479f4c2135748634a5ccbfda0af140aad803478c3150fdb30377e6e95396f047\"" Sep 4 00:06:34.160945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280102914.mount: Deactivated successfully. Sep 4 00:06:34.166611 kubelet[2802]: E0904 00:06:34.166094 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-879lm" podUID="b4c73824-aa33-4959-8741-865f24cc1aca" Sep 4 00:06:34.353462 containerd[1554]: time="2025-09-04T00:06:34.352941770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:34.358618 containerd[1554]: time="2025-09-04T00:06:34.358536609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 4 00:06:34.360765 containerd[1554]: time="2025-09-04T00:06:34.360640624Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:34.365717 containerd[1554]: time="2025-09-04T00:06:34.365622946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:34.368111 containerd[1554]: time="2025-09-04T00:06:34.367883089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.204027595s" Sep 4 00:06:34.368111 containerd[1554]: time="2025-09-04T00:06:34.367948803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 00:06:34.370233 containerd[1554]: time="2025-09-04T00:06:34.370182814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 00:06:34.378381 containerd[1554]: time="2025-09-04T00:06:34.378302490Z" level=info msg="CreateContainer within sandbox \"26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 00:06:34.404744 containerd[1554]: time="2025-09-04T00:06:34.404576164Z" level=info msg="Container d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:34.421562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259594960.mount: Deactivated successfully. 
Sep 4 00:06:34.439611 containerd[1554]: time="2025-09-04T00:06:34.439542815Z" level=info msg="CreateContainer within sandbox \"26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e\"" Sep 4 00:06:34.442569 containerd[1554]: time="2025-09-04T00:06:34.440745292Z" level=info msg="StartContainer for \"d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e\"" Sep 4 00:06:34.444795 containerd[1554]: time="2025-09-04T00:06:34.444735832Z" level=info msg="connecting to shim d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e" address="unix:///run/containerd/s/3980a1736c4a62287dfdd0bf13cf50ce886f483d43b01506d2fbb3975dcaf009" protocol=ttrpc version=3 Sep 4 00:06:34.505162 systemd[1]: Started cri-containerd-d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e.scope - libcontainer container d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e. Sep 4 00:06:34.619181 containerd[1554]: time="2025-09-04T00:06:34.619103796Z" level=info msg="StartContainer for \"d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e\" returns successfully" Sep 4 00:06:34.644360 systemd[1]: cri-containerd-d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e.scope: Deactivated successfully. 
Sep 4 00:06:34.650614 containerd[1554]: time="2025-09-04T00:06:34.650008433Z" level=info msg="received exit event container_id:\"d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e\" id:\"d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e\" pid:3399 exited_at:{seconds:1756944394 nanos:648264056}" Sep 4 00:06:34.651095 containerd[1554]: time="2025-09-04T00:06:34.650374377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e\" id:\"d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e\" pid:3399 exited_at:{seconds:1756944394 nanos:648264056}" Sep 4 00:06:34.704499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d15de656f30db8345bca3dd1f62d6c0b7c43e1127f2326dfd3fead1d787a307e-rootfs.mount: Deactivated successfully. Sep 4 00:06:36.164989 kubelet[2802]: E0904 00:06:36.164892 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-879lm" podUID="b4c73824-aa33-4959-8741-865f24cc1aca" Sep 4 00:06:37.700018 containerd[1554]: time="2025-09-04T00:06:37.699900196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:37.701624 containerd[1554]: time="2025-09-04T00:06:37.701470938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 4 00:06:37.703822 containerd[1554]: time="2025-09-04T00:06:37.703772425Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:37.708461 containerd[1554]: time="2025-09-04T00:06:37.708159513Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:37.710505 containerd[1554]: time="2025-09-04T00:06:37.710455932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.339980956s" Sep 4 00:06:37.710655 containerd[1554]: time="2025-09-04T00:06:37.710534582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 4 00:06:37.715663 containerd[1554]: time="2025-09-04T00:06:37.715561292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 4 00:06:37.746391 containerd[1554]: time="2025-09-04T00:06:37.746300707Z" level=info msg="CreateContainer within sandbox \"479f4c2135748634a5ccbfda0af140aad803478c3150fdb30377e6e95396f047\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 00:06:37.763468 containerd[1554]: time="2025-09-04T00:06:37.760661028Z" level=info msg="Container 8bd7db6eeefacb1d12800fd2688ff4ff55971214f0b53fa180c7c15848511cc4: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:37.777869 containerd[1554]: time="2025-09-04T00:06:37.777730929Z" level=info msg="CreateContainer within sandbox \"479f4c2135748634a5ccbfda0af140aad803478c3150fdb30377e6e95396f047\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8bd7db6eeefacb1d12800fd2688ff4ff55971214f0b53fa180c7c15848511cc4\"" Sep 4 00:06:37.779199 containerd[1554]: time="2025-09-04T00:06:37.778779998Z" level=info msg="StartContainer for 
\"8bd7db6eeefacb1d12800fd2688ff4ff55971214f0b53fa180c7c15848511cc4\"" Sep 4 00:06:37.781188 containerd[1554]: time="2025-09-04T00:06:37.781135887Z" level=info msg="connecting to shim 8bd7db6eeefacb1d12800fd2688ff4ff55971214f0b53fa180c7c15848511cc4" address="unix:///run/containerd/s/69c28a34df909f847f1fc5c336e6ef7056663259725a43d525cacd6915c7d19e" protocol=ttrpc version=3 Sep 4 00:06:37.823944 systemd[1]: Started cri-containerd-8bd7db6eeefacb1d12800fd2688ff4ff55971214f0b53fa180c7c15848511cc4.scope - libcontainer container 8bd7db6eeefacb1d12800fd2688ff4ff55971214f0b53fa180c7c15848511cc4. Sep 4 00:06:37.948978 containerd[1554]: time="2025-09-04T00:06:37.948910334Z" level=info msg="StartContainer for \"8bd7db6eeefacb1d12800fd2688ff4ff55971214f0b53fa180c7c15848511cc4\" returns successfully" Sep 4 00:06:38.167084 kubelet[2802]: E0904 00:06:38.167002 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-879lm" podUID="b4c73824-aa33-4959-8741-865f24cc1aca" Sep 4 00:06:39.378346 kubelet[2802]: I0904 00:06:39.377600 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:06:40.165454 kubelet[2802]: E0904 00:06:40.165273 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-879lm" podUID="b4c73824-aa33-4959-8741-865f24cc1aca" Sep 4 00:06:41.615939 containerd[1554]: time="2025-09-04T00:06:41.615818606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:41.618513 containerd[1554]: time="2025-09-04T00:06:41.618164313Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 4 00:06:41.620314 containerd[1554]: time="2025-09-04T00:06:41.620262213Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:41.624223 containerd[1554]: time="2025-09-04T00:06:41.624159175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:41.625819 containerd[1554]: time="2025-09-04T00:06:41.625768605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.910141049s"
Sep 4 00:06:41.626166 containerd[1554]: time="2025-09-04T00:06:41.626003668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 4 00:06:41.630695 containerd[1554]: time="2025-09-04T00:06:41.630570479Z" level=info msg="CreateContainer within sandbox \"26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 00:06:41.647995 containerd[1554]: time="2025-09-04T00:06:41.647772749Z" level=info msg="Container 1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:06:41.669960 containerd[1554]: time="2025-09-04T00:06:41.669868588Z" level=info msg="CreateContainer within sandbox \"26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b\""
Sep 4 00:06:41.671715 containerd[1554]: time="2025-09-04T00:06:41.671645054Z" level=info msg="StartContainer for \"1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b\""
Sep 4 00:06:41.674292 containerd[1554]: time="2025-09-04T00:06:41.674159155Z" level=info msg="connecting to shim 1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b" address="unix:///run/containerd/s/3980a1736c4a62287dfdd0bf13cf50ce886f483d43b01506d2fbb3975dcaf009" protocol=ttrpc version=3
Sep 4 00:06:41.734411 systemd[1]: Started cri-containerd-1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b.scope - libcontainer container 1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b.
Sep 4 00:06:41.826327 containerd[1554]: time="2025-09-04T00:06:41.826124771Z" level=info msg="StartContainer for \"1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b\" returns successfully"
Sep 4 00:06:42.165383 kubelet[2802]: E0904 00:06:42.165074 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-879lm" podUID="b4c73824-aa33-4959-8741-865f24cc1aca"
Sep 4 00:06:42.444298 kubelet[2802]: I0904 00:06:42.443895 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67455f697c-4jkr2" podStartSLOduration=5.953439925 podStartE2EDuration="10.443858749s" podCreationTimestamp="2025-09-04 00:06:32 +0000 UTC" firstStartedPulling="2025-09-04 00:06:33.22218889 +0000 UTC m=+22.362879142" lastFinishedPulling="2025-09-04 00:06:37.712607724 +0000 UTC m=+26.853297966" observedRunningTime="2025-09-04 00:06:38.562818928 +0000 UTC m=+27.703509189" watchObservedRunningTime="2025-09-04 00:06:42.443858749 +0000 UTC m=+31.584549009"
Sep 4 00:06:43.108993 containerd[1554]: time="2025-09-04T00:06:43.108898479Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 00:06:43.113659 systemd[1]: cri-containerd-1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b.scope: Deactivated successfully.
Sep 4 00:06:43.114146 systemd[1]: cri-containerd-1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b.scope: Consumed 922ms CPU time, 192.7M memory peak, 171.3M written to disk.
Sep 4 00:06:43.119053 containerd[1554]: time="2025-09-04T00:06:43.118986975Z" level=info msg="received exit event container_id:\"1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b\" id:\"1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b\" pid:3500 exited_at:{seconds:1756944403 nanos:117997219}"
Sep 4 00:06:43.119677 containerd[1554]: time="2025-09-04T00:06:43.119375739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b\" id:\"1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b\" pid:3500 exited_at:{seconds:1756944403 nanos:117997219}"
Sep 4 00:06:43.143593 kubelet[2802]: I0904 00:06:43.143489 2802 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 00:06:43.213036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d8a3f8334500f8d48bc44a524d2e1436b7eb351d4dab61586b50623c0bb7f1b-rootfs.mount: Deactivated successfully.
Sep 4 00:06:43.274257 systemd[1]: Created slice kubepods-burstable-pod11e53487_d6bc_496a_8cbc_9ef48e546387.slice - libcontainer container kubepods-burstable-pod11e53487_d6bc_496a_8cbc_9ef48e546387.slice.
Sep 4 00:06:43.296886 kubelet[2802]: I0904 00:06:43.296807 2802 status_manager.go:890] "Failed to get status for pod" podUID="11e53487-d6bc-496a-8cbc-9ef48e546387" pod="kube-system/coredns-668d6bf9bc-vxgnf" err="pods \"coredns-668d6bf9bc-vxgnf\" is forbidden: User \"system:node:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' and this object"
Sep 4 00:06:43.297693 kubelet[2802]: W0904 00:06:43.297120 2802 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' and this object
Sep 4 00:06:43.297693 kubelet[2802]: E0904 00:06:43.297180 2802 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' and this object" logger="UnhandledError"
Sep 4 00:06:43.321123 kubelet[2802]: I0904 00:06:43.321042 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stv8n\" (UniqueName: \"kubernetes.io/projected/11e53487-d6bc-496a-8cbc-9ef48e546387-kube-api-access-stv8n\") pod \"coredns-668d6bf9bc-vxgnf\" (UID: \"11e53487-d6bc-496a-8cbc-9ef48e546387\") " pod="kube-system/coredns-668d6bf9bc-vxgnf"
Sep 4 00:06:43.321123 kubelet[2802]: I0904 00:06:43.321135 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11e53487-d6bc-496a-8cbc-9ef48e546387-config-volume\") pod \"coredns-668d6bf9bc-vxgnf\" (UID: \"11e53487-d6bc-496a-8cbc-9ef48e546387\") " pod="kube-system/coredns-668d6bf9bc-vxgnf"
Sep 4 00:06:43.338198 systemd[1]: Created slice kubepods-burstable-pod6725aa5b_2060_4c41_8f5a_a3dc25045bde.slice - libcontainer container kubepods-burstable-pod6725aa5b_2060_4c41_8f5a_a3dc25045bde.slice.
Sep 4 00:06:43.371359 systemd[1]: Created slice kubepods-besteffort-podf5be82fe_fe0e_4fa6_90ff_b3a883fedcc3.slice - libcontainer container kubepods-besteffort-podf5be82fe_fe0e_4fa6_90ff_b3a883fedcc3.slice.
Sep 4 00:06:43.415821 systemd[1]: Created slice kubepods-besteffort-podbd7f54d6_d680_45c7_b829_05311bfb335e.slice - libcontainer container kubepods-besteffort-podbd7f54d6_d680_45c7_b829_05311bfb335e.slice.
Sep 4 00:06:43.423781 kubelet[2802]: I0904 00:06:43.422873 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6725aa5b-2060-4c41-8f5a-a3dc25045bde-config-volume\") pod \"coredns-668d6bf9bc-g5hqx\" (UID: \"6725aa5b-2060-4c41-8f5a-a3dc25045bde\") " pod="kube-system/coredns-668d6bf9bc-g5hqx"
Sep 4 00:06:43.423781 kubelet[2802]: I0904 00:06:43.422950 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ps69\" (UniqueName: \"kubernetes.io/projected/10488c8b-ada8-4558-ae47-834cb1933df2-kube-api-access-7ps69\") pod \"goldmane-54d579b49d-hgxtn\" (UID: \"10488c8b-ada8-4558-ae47-834cb1933df2\") " pod="calico-system/goldmane-54d579b49d-hgxtn"
Sep 4 00:06:43.423781 kubelet[2802]: I0904 00:06:43.422987 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5008c380-c5ef-41a7-8255-393de025cd0c-whisker-backend-key-pair\") pod \"whisker-5c69b76b79-qlvtl\" (UID: \"5008c380-c5ef-41a7-8255-393de025cd0c\") " pod="calico-system/whisker-5c69b76b79-qlvtl"
Sep 4 00:06:43.423781 kubelet[2802]: I0904 00:06:43.423020 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twfkp\" (UniqueName: \"kubernetes.io/projected/2549e602-d3e9-41e0-95b5-f4f119856631-kube-api-access-twfkp\") pod \"calico-apiserver-5dcfb98d55-psg7f\" (UID: \"2549e602-d3e9-41e0-95b5-f4f119856631\") " pod="calico-apiserver/calico-apiserver-5dcfb98d55-psg7f"
Sep 4 00:06:43.423781 kubelet[2802]: I0904 00:06:43.423050 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd7f54d6-d680-45c7-b829-05311bfb335e-calico-apiserver-certs\") pod \"calico-apiserver-7b7454849d-pjblp\" (UID: \"bd7f54d6-d680-45c7-b829-05311bfb335e\") " pod="calico-apiserver/calico-apiserver-7b7454849d-pjblp"
Sep 4 00:06:43.424813 kubelet[2802]: I0904 00:06:43.423084 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5008c380-c5ef-41a7-8255-393de025cd0c-whisker-ca-bundle\") pod \"whisker-5c69b76b79-qlvtl\" (UID: \"5008c380-c5ef-41a7-8255-393de025cd0c\") " pod="calico-system/whisker-5c69b76b79-qlvtl"
Sep 4 00:06:43.424813 kubelet[2802]: I0904 00:06:43.423128 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3-calico-apiserver-certs\") pod \"calico-apiserver-7b7454849d-s7mwj\" (UID: \"f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3\") " pod="calico-apiserver/calico-apiserver-7b7454849d-s7mwj"
Sep 4 00:06:43.424813 kubelet[2802]: I0904 00:06:43.423193 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10488c8b-ada8-4558-ae47-834cb1933df2-config\") pod \"goldmane-54d579b49d-hgxtn\" (UID: \"10488c8b-ada8-4558-ae47-834cb1933df2\") " pod="calico-system/goldmane-54d579b49d-hgxtn"
Sep 4 00:06:43.424813 kubelet[2802]: I0904 00:06:43.423229 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qx4k\" (UniqueName: \"kubernetes.io/projected/a3c35717-be2b-4037-83b1-22b7815e168b-kube-api-access-6qx4k\") pod \"calico-kube-controllers-67868797b5-gfkg4\" (UID: \"a3c35717-be2b-4037-83b1-22b7815e168b\") " pod="calico-system/calico-kube-controllers-67868797b5-gfkg4"
Sep 4 00:06:43.424813 kubelet[2802]: I0904 00:06:43.423282 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qsv7\" (UniqueName: \"kubernetes.io/projected/5008c380-c5ef-41a7-8255-393de025cd0c-kube-api-access-5qsv7\") pod \"whisker-5c69b76b79-qlvtl\" (UID: \"5008c380-c5ef-41a7-8255-393de025cd0c\") " pod="calico-system/whisker-5c69b76b79-qlvtl"
Sep 4 00:06:43.425125 kubelet[2802]: I0904 00:06:43.423320 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bqf\" (UniqueName: \"kubernetes.io/projected/6725aa5b-2060-4c41-8f5a-a3dc25045bde-kube-api-access-d2bqf\") pod \"coredns-668d6bf9bc-g5hqx\" (UID: \"6725aa5b-2060-4c41-8f5a-a3dc25045bde\") " pod="kube-system/coredns-668d6bf9bc-g5hqx"
Sep 4 00:06:43.425125 kubelet[2802]: I0904 00:06:43.423349 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/10488c8b-ada8-4558-ae47-834cb1933df2-goldmane-key-pair\") pod \"goldmane-54d579b49d-hgxtn\" (UID: \"10488c8b-ada8-4558-ae47-834cb1933df2\") " pod="calico-system/goldmane-54d579b49d-hgxtn"
Sep 4 00:06:43.425125 kubelet[2802]: I0904 00:06:43.423395 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmmd\" (UniqueName: \"kubernetes.io/projected/bd7f54d6-d680-45c7-b829-05311bfb335e-kube-api-access-gkmmd\") pod \"calico-apiserver-7b7454849d-pjblp\" (UID: \"bd7f54d6-d680-45c7-b829-05311bfb335e\") " pod="calico-apiserver/calico-apiserver-7b7454849d-pjblp"
Sep 4 00:06:43.425125 kubelet[2802]: I0904 00:06:43.423470 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m59rh\" (UniqueName: \"kubernetes.io/projected/f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3-kube-api-access-m59rh\") pod \"calico-apiserver-7b7454849d-s7mwj\" (UID: \"f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3\") " pod="calico-apiserver/calico-apiserver-7b7454849d-s7mwj"
Sep 4 00:06:43.425489 kubelet[2802]: I0904 00:06:43.423510 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10488c8b-ada8-4558-ae47-834cb1933df2-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-hgxtn\" (UID: \"10488c8b-ada8-4558-ae47-834cb1933df2\") " pod="calico-system/goldmane-54d579b49d-hgxtn"
Sep 4 00:06:43.426377 kubelet[2802]: I0904 00:06:43.426335 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2549e602-d3e9-41e0-95b5-f4f119856631-calico-apiserver-certs\") pod \"calico-apiserver-5dcfb98d55-psg7f\" (UID: \"2549e602-d3e9-41e0-95b5-f4f119856631\") " pod="calico-apiserver/calico-apiserver-5dcfb98d55-psg7f"
Sep 4 00:06:43.427456 kubelet[2802]: I0904 00:06:43.426608 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3c35717-be2b-4037-83b1-22b7815e168b-tigera-ca-bundle\") pod \"calico-kube-controllers-67868797b5-gfkg4\" (UID: \"a3c35717-be2b-4037-83b1-22b7815e168b\") " pod="calico-system/calico-kube-controllers-67868797b5-gfkg4"
Sep 4 00:06:43.448819 systemd[1]: Created slice kubepods-besteffort-poda3c35717_be2b_4037_83b1_22b7815e168b.slice - libcontainer container kubepods-besteffort-poda3c35717_be2b_4037_83b1_22b7815e168b.slice.
Sep 4 00:06:43.479699 systemd[1]: Created slice kubepods-besteffort-pod2549e602_d3e9_41e0_95b5_f4f119856631.slice - libcontainer container kubepods-besteffort-pod2549e602_d3e9_41e0_95b5_f4f119856631.slice.
Sep 4 00:06:43.500383 systemd[1]: Created slice kubepods-besteffort-pod10488c8b_ada8_4558_ae47_834cb1933df2.slice - libcontainer container kubepods-besteffort-pod10488c8b_ada8_4558_ae47_834cb1933df2.slice.
Sep 4 00:06:43.663584 systemd[1]: Created slice kubepods-besteffort-pod5008c380_c5ef_41a7_8255_393de025cd0c.slice - libcontainer container kubepods-besteffort-pod5008c380_c5ef_41a7_8255_393de025cd0c.slice.
Sep 4 00:06:43.696753 containerd[1554]: time="2025-09-04T00:06:43.695821754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c69b76b79-qlvtl,Uid:5008c380-c5ef-41a7-8255-393de025cd0c,Namespace:calico-system,Attempt:0,}"
Sep 4 00:06:43.760736 containerd[1554]: time="2025-09-04T00:06:43.760662236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7454849d-pjblp,Uid:bd7f54d6-d680-45c7-b829-05311bfb335e,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 00:06:43.766078 containerd[1554]: time="2025-09-04T00:06:43.766002265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67868797b5-gfkg4,Uid:a3c35717-be2b-4037-83b1-22b7815e168b,Namespace:calico-system,Attempt:0,}"
Sep 4 00:06:43.791612 containerd[1554]: time="2025-09-04T00:06:43.791340050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcfb98d55-psg7f,Uid:2549e602-d3e9-41e0-95b5-f4f119856631,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 00:06:43.879579 containerd[1554]: time="2025-09-04T00:06:43.879330631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hgxtn,Uid:10488c8b-ada8-4558-ae47-834cb1933df2,Namespace:calico-system,Attempt:0,}"
Sep 4 00:06:43.991082 containerd[1554]: time="2025-09-04T00:06:43.990899279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7454849d-s7mwj,Uid:f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 00:06:44.183518 systemd[1]: Created slice kubepods-besteffort-podb4c73824_aa33_4959_8741_865f24cc1aca.slice - libcontainer container kubepods-besteffort-podb4c73824_aa33_4959_8741_865f24cc1aca.slice.
Sep 4 00:06:44.197012 containerd[1554]: time="2025-09-04T00:06:44.196488261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-879lm,Uid:b4c73824-aa33-4959-8741-865f24cc1aca,Namespace:calico-system,Attempt:0,}"
Sep 4 00:06:44.429122 kubelet[2802]: E0904 00:06:44.429042 2802 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 4 00:06:44.431720 kubelet[2802]: E0904 00:06:44.430232 2802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/11e53487-d6bc-496a-8cbc-9ef48e546387-config-volume podName:11e53487-d6bc-496a-8cbc-9ef48e546387 nodeName:}" failed. No retries permitted until 2025-09-04 00:06:44.929369614 +0000 UTC m=+34.070059875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/11e53487-d6bc-496a-8cbc-9ef48e546387-config-volume") pod "coredns-668d6bf9bc-vxgnf" (UID: "11e53487-d6bc-496a-8cbc-9ef48e546387") : failed to sync configmap cache: timed out waiting for the condition
Sep 4 00:06:44.458610 containerd[1554]: time="2025-09-04T00:06:44.458562603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 4 00:06:44.529761 kubelet[2802]: E0904 00:06:44.529621 2802 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 4 00:06:44.529999 kubelet[2802]: E0904 00:06:44.529776 2802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6725aa5b-2060-4c41-8f5a-a3dc25045bde-config-volume podName:6725aa5b-2060-4c41-8f5a-a3dc25045bde nodeName:}" failed. No retries permitted until 2025-09-04 00:06:45.029744884 +0000 UTC m=+34.170435141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6725aa5b-2060-4c41-8f5a-a3dc25045bde-config-volume") pod "coredns-668d6bf9bc-g5hqx" (UID: "6725aa5b-2060-4c41-8f5a-a3dc25045bde") : failed to sync configmap cache: timed out waiting for the condition
Sep 4 00:06:44.590921 containerd[1554]: time="2025-09-04T00:06:44.590843892Z" level=error msg="Failed to destroy network for sandbox \"c627874a111f5edd7005ce5dfb607c3254dc51b6c58854cb4b94677b312ae941\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.599619 systemd[1]: run-netns-cni\x2d962495f8\x2df95b\x2d1cbb\x2d40ae\x2d986303588a91.mount: Deactivated successfully.
Sep 4 00:06:44.600833 containerd[1554]: time="2025-09-04T00:06:44.600533257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7454849d-s7mwj,Uid:f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c627874a111f5edd7005ce5dfb607c3254dc51b6c58854cb4b94677b312ae941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.601020 kubelet[2802]: E0904 00:06:44.600859 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c627874a111f5edd7005ce5dfb607c3254dc51b6c58854cb4b94677b312ae941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.601020 kubelet[2802]: E0904 00:06:44.600980 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c627874a111f5edd7005ce5dfb607c3254dc51b6c58854cb4b94677b312ae941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b7454849d-s7mwj"
Sep 4 00:06:44.601155 kubelet[2802]: E0904 00:06:44.601015 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c627874a111f5edd7005ce5dfb607c3254dc51b6c58854cb4b94677b312ae941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b7454849d-s7mwj"
Sep 4 00:06:44.601155 kubelet[2802]: E0904 00:06:44.601094 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b7454849d-s7mwj_calico-apiserver(f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b7454849d-s7mwj_calico-apiserver(f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c627874a111f5edd7005ce5dfb607c3254dc51b6c58854cb4b94677b312ae941\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b7454849d-s7mwj" podUID="f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3"
Sep 4 00:06:44.629317 containerd[1554]: time="2025-09-04T00:06:44.629049473Z" level=error msg="Failed to destroy network for sandbox \"50d24a0af31c80b7d9d51b6159059f0e113c7610d8e0de066a217db2a22c013d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.637004 systemd[1]: run-netns-cni\x2db1b951d6\x2de238\x2dafb1\x2d632d\x2d1aa818cc1e6e.mount: Deactivated successfully.
Sep 4 00:06:44.641075 kubelet[2802]: E0904 00:06:44.638476 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50d24a0af31c80b7d9d51b6159059f0e113c7610d8e0de066a217db2a22c013d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.641075 kubelet[2802]: E0904 00:06:44.638578 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50d24a0af31c80b7d9d51b6159059f0e113c7610d8e0de066a217db2a22c013d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c69b76b79-qlvtl"
Sep 4 00:06:44.641075 kubelet[2802]: E0904 00:06:44.638618 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50d24a0af31c80b7d9d51b6159059f0e113c7610d8e0de066a217db2a22c013d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c69b76b79-qlvtl"
Sep 4 00:06:44.641298 containerd[1554]: time="2025-09-04T00:06:44.637665322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c69b76b79-qlvtl,Uid:5008c380-c5ef-41a7-8255-393de025cd0c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50d24a0af31c80b7d9d51b6159059f0e113c7610d8e0de066a217db2a22c013d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.641424 kubelet[2802]: E0904 00:06:44.638701 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c69b76b79-qlvtl_calico-system(5008c380-c5ef-41a7-8255-393de025cd0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c69b76b79-qlvtl_calico-system(5008c380-c5ef-41a7-8255-393de025cd0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50d24a0af31c80b7d9d51b6159059f0e113c7610d8e0de066a217db2a22c013d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c69b76b79-qlvtl" podUID="5008c380-c5ef-41a7-8255-393de025cd0c"
Sep 4 00:06:44.663463 containerd[1554]: time="2025-09-04T00:06:44.661089054Z" level=error msg="Failed to destroy network for sandbox \"64173fb4295107cd4dd1d668f182e41e383614077bbb57708dc9dc9cc6c88e76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.667672 containerd[1554]: time="2025-09-04T00:06:44.667524607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcfb98d55-psg7f,Uid:2549e602-d3e9-41e0-95b5-f4f119856631,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64173fb4295107cd4dd1d668f182e41e383614077bbb57708dc9dc9cc6c88e76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.668782 systemd[1]: run-netns-cni\x2d74ac0b90\x2de8e9\x2d4a6c\x2d3b4f\x2d85e091204502.mount: Deactivated successfully.
Sep 4 00:06:44.672276 kubelet[2802]: E0904 00:06:44.672055 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64173fb4295107cd4dd1d668f182e41e383614077bbb57708dc9dc9cc6c88e76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.672276 kubelet[2802]: E0904 00:06:44.672144 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64173fb4295107cd4dd1d668f182e41e383614077bbb57708dc9dc9cc6c88e76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcfb98d55-psg7f"
Sep 4 00:06:44.672276 kubelet[2802]: E0904 00:06:44.672183 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64173fb4295107cd4dd1d668f182e41e383614077bbb57708dc9dc9cc6c88e76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcfb98d55-psg7f"
Sep 4 00:06:44.674196 kubelet[2802]: E0904 00:06:44.672271 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dcfb98d55-psg7f_calico-apiserver(2549e602-d3e9-41e0-95b5-f4f119856631)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dcfb98d55-psg7f_calico-apiserver(2549e602-d3e9-41e0-95b5-f4f119856631)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64173fb4295107cd4dd1d668f182e41e383614077bbb57708dc9dc9cc6c88e76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dcfb98d55-psg7f" podUID="2549e602-d3e9-41e0-95b5-f4f119856631"
Sep 4 00:06:44.685977 containerd[1554]: time="2025-09-04T00:06:44.685591548Z" level=error msg="Failed to destroy network for sandbox \"160bd8cc276badc1192879f126d611885bc6f42ed5874aafed39a510b0fb3c1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.686122 containerd[1554]: time="2025-09-04T00:06:44.686073086Z" level=error msg="Failed to destroy network for sandbox \"0658466ac698e12566ebed76c6d5df211ccdc5d71537f05f4e7b566802fb28b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.691159 containerd[1554]: time="2025-09-04T00:06:44.690975834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7454849d-pjblp,Uid:bd7f54d6-d680-45c7-b829-05311bfb335e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0658466ac698e12566ebed76c6d5df211ccdc5d71537f05f4e7b566802fb28b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.691392 kubelet[2802]: E0904 00:06:44.691283 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0658466ac698e12566ebed76c6d5df211ccdc5d71537f05f4e7b566802fb28b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.691392 kubelet[2802]: E0904 00:06:44.691358 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0658466ac698e12566ebed76c6d5df211ccdc5d71537f05f4e7b566802fb28b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b7454849d-pjblp"
Sep 4 00:06:44.691392 kubelet[2802]: E0904 00:06:44.691391 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0658466ac698e12566ebed76c6d5df211ccdc5d71537f05f4e7b566802fb28b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b7454849d-pjblp"
Sep 4 00:06:44.692230 kubelet[2802]: E0904 00:06:44.691959 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b7454849d-pjblp_calico-apiserver(bd7f54d6-d680-45c7-b829-05311bfb335e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b7454849d-pjblp_calico-apiserver(bd7f54d6-d680-45c7-b829-05311bfb335e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0658466ac698e12566ebed76c6d5df211ccdc5d71537f05f4e7b566802fb28b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b7454849d-pjblp" podUID="bd7f54d6-d680-45c7-b829-05311bfb335e"
Sep 4 00:06:44.695024 containerd[1554]: time="2025-09-04T00:06:44.694943579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-879lm,Uid:b4c73824-aa33-4959-8741-865f24cc1aca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"160bd8cc276badc1192879f126d611885bc6f42ed5874aafed39a510b0fb3c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.695442 kubelet[2802]: E0904 00:06:44.695231 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"160bd8cc276badc1192879f126d611885bc6f42ed5874aafed39a510b0fb3c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.695740 kubelet[2802]: E0904 00:06:44.695298 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"160bd8cc276badc1192879f126d611885bc6f42ed5874aafed39a510b0fb3c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-879lm"
Sep 4 00:06:44.695740 kubelet[2802]: E0904 00:06:44.695640 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"160bd8cc276badc1192879f126d611885bc6f42ed5874aafed39a510b0fb3c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-879lm"
Sep 4 00:06:44.695740 kubelet[2802]: E0904 00:06:44.695723 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-879lm_calico-system(b4c73824-aa33-4959-8741-865f24cc1aca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-879lm_calico-system(b4c73824-aa33-4959-8741-865f24cc1aca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"160bd8cc276badc1192879f126d611885bc6f42ed5874aafed39a510b0fb3c1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-879lm" podUID="b4c73824-aa33-4959-8741-865f24cc1aca"
Sep 4 00:06:44.697509 containerd[1554]: time="2025-09-04T00:06:44.697461119Z" level=error msg="Failed to destroy network for sandbox \"91d6ee13d38449dfcbc5860e0480aa4ca360552182992ff386cc6807d8a30c28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.700189 containerd[1554]: time="2025-09-04T00:06:44.700130371Z" level=error msg="Failed to destroy network for sandbox \"0e7f3a9b6d9f04e46c934d8948dfc2a3c3ddd6a983fb6ce4f0b772b4a92612aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.701689 containerd[1554]: time="2025-09-04T00:06:44.701537666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hgxtn,Uid:10488c8b-ada8-4558-ae47-834cb1933df2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d6ee13d38449dfcbc5860e0480aa4ca360552182992ff386cc6807d8a30c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.702888 kubelet[2802]: E0904 00:06:44.702041 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d6ee13d38449dfcbc5860e0480aa4ca360552182992ff386cc6807d8a30c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:06:44.702888 kubelet[2802]: E0904 00:06:44.702490 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d6ee13d38449dfcbc5860e0480aa4ca360552182992ff386cc6807d8a30c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hgxtn"
Sep 4 00:06:44.702888 kubelet[2802]: E0904 00:06:44.702546 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d6ee13d38449dfcbc5860e0480aa4ca360552182992ff386cc6807d8a30c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hgxtn"
Sep 4 00:06:44.703148 kubelet[2802]: E0904 00:06:44.702651 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-hgxtn_calico-system(10488c8b-ada8-4558-ae47-834cb1933df2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod
\\\"goldmane-54d579b49d-hgxtn_calico-system(10488c8b-ada8-4558-ae47-834cb1933df2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91d6ee13d38449dfcbc5860e0480aa4ca360552182992ff386cc6807d8a30c28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hgxtn" podUID="10488c8b-ada8-4558-ae47-834cb1933df2" Sep 4 00:06:44.703965 containerd[1554]: time="2025-09-04T00:06:44.703481492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67868797b5-gfkg4,Uid:a3c35717-be2b-4037-83b1-22b7815e168b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7f3a9b6d9f04e46c934d8948dfc2a3c3ddd6a983fb6ce4f0b772b4a92612aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:44.704121 kubelet[2802]: E0904 00:06:44.703734 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7f3a9b6d9f04e46c934d8948dfc2a3c3ddd6a983fb6ce4f0b772b4a92612aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:44.704121 kubelet[2802]: E0904 00:06:44.703790 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7f3a9b6d9f04e46c934d8948dfc2a3c3ddd6a983fb6ce4f0b772b4a92612aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-67868797b5-gfkg4" Sep 4 00:06:44.704121 kubelet[2802]: E0904 00:06:44.703817 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7f3a9b6d9f04e46c934d8948dfc2a3c3ddd6a983fb6ce4f0b772b4a92612aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67868797b5-gfkg4" Sep 4 00:06:44.704748 kubelet[2802]: E0904 00:06:44.703875 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67868797b5-gfkg4_calico-system(a3c35717-be2b-4037-83b1-22b7815e168b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67868797b5-gfkg4_calico-system(a3c35717-be2b-4037-83b1-22b7815e168b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e7f3a9b6d9f04e46c934d8948dfc2a3c3ddd6a983fb6ce4f0b772b4a92612aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67868797b5-gfkg4" podUID="a3c35717-be2b-4037-83b1-22b7815e168b" Sep 4 00:06:45.094196 containerd[1554]: time="2025-09-04T00:06:45.094116903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vxgnf,Uid:11e53487-d6bc-496a-8cbc-9ef48e546387,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:45.161580 containerd[1554]: time="2025-09-04T00:06:45.161495791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5hqx,Uid:6725aa5b-2060-4c41-8f5a-a3dc25045bde,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:45.208992 systemd[1]: 
run-netns-cni\x2d03eecff6\x2d0cf8\x2d38b0\x2d1eda\x2d9e031bc78561.mount: Deactivated successfully. Sep 4 00:06:45.209186 systemd[1]: run-netns-cni\x2d64c641c2\x2d85f7\x2d6a30\x2d6035\x2d9327fecbbf81.mount: Deactivated successfully. Sep 4 00:06:45.209349 systemd[1]: run-netns-cni\x2d5aa88220\x2d91b8\x2db860\x2dda55\x2d53051550a775.mount: Deactivated successfully. Sep 4 00:06:45.210544 systemd[1]: run-netns-cni\x2da56e56a6\x2ddc35\x2dcc73\x2d0432\x2dc0ebd1f5f497.mount: Deactivated successfully. Sep 4 00:06:45.232461 containerd[1554]: time="2025-09-04T00:06:45.232366138Z" level=error msg="Failed to destroy network for sandbox \"481fb06cddeaf23688e25d939537871bdeaceb11f4ad67b1b21a1dcbebaa4b97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:45.241596 containerd[1554]: time="2025-09-04T00:06:45.239359288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vxgnf,Uid:11e53487-d6bc-496a-8cbc-9ef48e546387,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"481fb06cddeaf23688e25d939537871bdeaceb11f4ad67b1b21a1dcbebaa4b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:45.241937 kubelet[2802]: E0904 00:06:45.239877 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"481fb06cddeaf23688e25d939537871bdeaceb11f4ad67b1b21a1dcbebaa4b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:45.241937 kubelet[2802]: E0904 00:06:45.239982 2802 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"481fb06cddeaf23688e25d939537871bdeaceb11f4ad67b1b21a1dcbebaa4b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vxgnf" Sep 4 00:06:45.241937 kubelet[2802]: E0904 00:06:45.240020 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"481fb06cddeaf23688e25d939537871bdeaceb11f4ad67b1b21a1dcbebaa4b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vxgnf" Sep 4 00:06:45.242268 kubelet[2802]: E0904 00:06:45.240097 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vxgnf_kube-system(11e53487-d6bc-496a-8cbc-9ef48e546387)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vxgnf_kube-system(11e53487-d6bc-496a-8cbc-9ef48e546387)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"481fb06cddeaf23688e25d939537871bdeaceb11f4ad67b1b21a1dcbebaa4b97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vxgnf" podUID="11e53487-d6bc-496a-8cbc-9ef48e546387" Sep 4 00:06:45.244209 systemd[1]: run-netns-cni\x2d4d33ae53\x2d964c\x2d94a9\x2d6710\x2d24aa58a4b7b6.mount: Deactivated successfully. 
Sep 4 00:06:45.329934 containerd[1554]: time="2025-09-04T00:06:45.329846260Z" level=error msg="Failed to destroy network for sandbox \"ffe118ec2e4cb1c945d8a678ba87d35d99106aa41799f277a95688863f76db96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:45.333753 containerd[1554]: time="2025-09-04T00:06:45.333664792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5hqx,Uid:6725aa5b-2060-4c41-8f5a-a3dc25045bde,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe118ec2e4cb1c945d8a678ba87d35d99106aa41799f277a95688863f76db96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:45.337133 kubelet[2802]: E0904 00:06:45.335577 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe118ec2e4cb1c945d8a678ba87d35d99106aa41799f277a95688863f76db96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:45.337133 kubelet[2802]: E0904 00:06:45.335739 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe118ec2e4cb1c945d8a678ba87d35d99106aa41799f277a95688863f76db96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g5hqx" Sep 4 00:06:45.337133 kubelet[2802]: E0904 00:06:45.335805 2802 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe118ec2e4cb1c945d8a678ba87d35d99106aa41799f277a95688863f76db96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g5hqx" Sep 4 00:06:45.339410 kubelet[2802]: E0904 00:06:45.338548 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g5hqx_kube-system(6725aa5b-2060-4c41-8f5a-a3dc25045bde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g5hqx_kube-system(6725aa5b-2060-4c41-8f5a-a3dc25045bde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffe118ec2e4cb1c945d8a678ba87d35d99106aa41799f277a95688863f76db96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g5hqx" podUID="6725aa5b-2060-4c41-8f5a-a3dc25045bde" Sep 4 00:06:45.338249 systemd[1]: run-netns-cni\x2dfa728807\x2d04a5\x2def57\x2d3fba\x2d9b25215f2ebb.mount: Deactivated successfully. Sep 4 00:06:51.346713 kubelet[2802]: I0904 00:06:51.346372 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:06:52.298285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094375244.mount: Deactivated successfully. 
Sep 4 00:06:52.337687 containerd[1554]: time="2025-09-04T00:06:52.337599431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:52.339788 containerd[1554]: time="2025-09-04T00:06:52.339414729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 00:06:52.341468 containerd[1554]: time="2025-09-04T00:06:52.341358340Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:52.346400 containerd[1554]: time="2025-09-04T00:06:52.345141487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:52.346400 containerd[1554]: time="2025-09-04T00:06:52.346180960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.887349619s" Sep 4 00:06:52.346400 containerd[1554]: time="2025-09-04T00:06:52.346237555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 00:06:52.380076 containerd[1554]: time="2025-09-04T00:06:52.380005621Z" level=info msg="CreateContainer within sandbox \"26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 00:06:52.401472 containerd[1554]: time="2025-09-04T00:06:52.399669914Z" level=info msg="Container 
14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:52.419761 containerd[1554]: time="2025-09-04T00:06:52.419522481Z" level=info msg="CreateContainer within sandbox \"26f2636fdc9397ea4a214b0e3e90718ebd0774cd8278545515d493c6b703c43c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2\"" Sep 4 00:06:52.421219 containerd[1554]: time="2025-09-04T00:06:52.421053145Z" level=info msg="StartContainer for \"14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2\"" Sep 4 00:06:52.426027 containerd[1554]: time="2025-09-04T00:06:52.425637089Z" level=info msg="connecting to shim 14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2" address="unix:///run/containerd/s/3980a1736c4a62287dfdd0bf13cf50ce886f483d43b01506d2fbb3975dcaf009" protocol=ttrpc version=3 Sep 4 00:06:52.476898 systemd[1]: Started cri-containerd-14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2.scope - libcontainer container 14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2. Sep 4 00:06:52.584907 containerd[1554]: time="2025-09-04T00:06:52.584052846Z" level=info msg="StartContainer for \"14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2\" returns successfully" Sep 4 00:06:52.738117 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 00:06:52.738407 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 00:06:53.040577 kubelet[2802]: I0904 00:06:53.038069 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5008c380-c5ef-41a7-8255-393de025cd0c-whisker-backend-key-pair\") pod \"5008c380-c5ef-41a7-8255-393de025cd0c\" (UID: \"5008c380-c5ef-41a7-8255-393de025cd0c\") " Sep 4 00:06:53.040577 kubelet[2802]: I0904 00:06:53.038219 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qsv7\" (UniqueName: \"kubernetes.io/projected/5008c380-c5ef-41a7-8255-393de025cd0c-kube-api-access-5qsv7\") pod \"5008c380-c5ef-41a7-8255-393de025cd0c\" (UID: \"5008c380-c5ef-41a7-8255-393de025cd0c\") " Sep 4 00:06:53.040577 kubelet[2802]: I0904 00:06:53.038337 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5008c380-c5ef-41a7-8255-393de025cd0c-whisker-ca-bundle\") pod \"5008c380-c5ef-41a7-8255-393de025cd0c\" (UID: \"5008c380-c5ef-41a7-8255-393de025cd0c\") " Sep 4 00:06:53.045866 kubelet[2802]: I0904 00:06:53.045797 2802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5008c380-c5ef-41a7-8255-393de025cd0c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5008c380-c5ef-41a7-8255-393de025cd0c" (UID: "5008c380-c5ef-41a7-8255-393de025cd0c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:06:53.058005 kubelet[2802]: I0904 00:06:53.057781 2802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5008c380-c5ef-41a7-8255-393de025cd0c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5008c380-c5ef-41a7-8255-393de025cd0c" (UID: "5008c380-c5ef-41a7-8255-393de025cd0c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:06:53.058717 kubelet[2802]: I0904 00:06:53.058521 2802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5008c380-c5ef-41a7-8255-393de025cd0c-kube-api-access-5qsv7" (OuterVolumeSpecName: "kube-api-access-5qsv7") pod "5008c380-c5ef-41a7-8255-393de025cd0c" (UID: "5008c380-c5ef-41a7-8255-393de025cd0c"). InnerVolumeSpecName "kube-api-access-5qsv7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:06:53.140492 kubelet[2802]: I0904 00:06:53.140397 2802 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5008c380-c5ef-41a7-8255-393de025cd0c-whisker-ca-bundle\") on node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" DevicePath \"\"" Sep 4 00:06:53.140492 kubelet[2802]: I0904 00:06:53.140494 2802 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5008c380-c5ef-41a7-8255-393de025cd0c-whisker-backend-key-pair\") on node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" DevicePath \"\"" Sep 4 00:06:53.140959 kubelet[2802]: I0904 00:06:53.140517 2802 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5qsv7\" (UniqueName: \"kubernetes.io/projected/5008c380-c5ef-41a7-8255-393de025cd0c-kube-api-access-5qsv7\") on node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" DevicePath \"\"" Sep 4 00:06:53.186076 systemd[1]: Removed slice kubepods-besteffort-pod5008c380_c5ef_41a7_8255_393de025cd0c.slice - libcontainer container kubepods-besteffort-pod5008c380_c5ef_41a7_8255_393de025cd0c.slice. Sep 4 00:06:53.298157 systemd[1]: var-lib-kubelet-pods-5008c380\x2dc5ef\x2d41a7\x2d8255\x2d393de025cd0c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5qsv7.mount: Deactivated successfully. 
Sep 4 00:06:53.298384 systemd[1]: var-lib-kubelet-pods-5008c380\x2dc5ef\x2d41a7\x2d8255\x2d393de025cd0c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 4 00:06:53.576779 kubelet[2802]: I0904 00:06:53.575587 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2cjq7" podStartSLOduration=2.38885549 podStartE2EDuration="21.575546284s" podCreationTimestamp="2025-09-04 00:06:32 +0000 UTC" firstStartedPulling="2025-09-04 00:06:33.161260381 +0000 UTC m=+22.301950631" lastFinishedPulling="2025-09-04 00:06:52.347951172 +0000 UTC m=+41.488641425" observedRunningTime="2025-09-04 00:06:53.569690396 +0000 UTC m=+42.710380656" watchObservedRunningTime="2025-09-04 00:06:53.575546284 +0000 UTC m=+42.716236535" Sep 4 00:06:53.690512 systemd[1]: Created slice kubepods-besteffort-podaa598e70_169c_4661_bc00_3b458bbaceaf.slice - libcontainer container kubepods-besteffort-podaa598e70_169c_4661_bc00_3b458bbaceaf.slice. 
Sep 4 00:06:53.746213 kubelet[2802]: I0904 00:06:53.745942 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9kt\" (UniqueName: \"kubernetes.io/projected/aa598e70-169c-4661-bc00-3b458bbaceaf-kube-api-access-pz9kt\") pod \"whisker-7b7f8855f8-cznkb\" (UID: \"aa598e70-169c-4661-bc00-3b458bbaceaf\") " pod="calico-system/whisker-7b7f8855f8-cznkb" Sep 4 00:06:53.746213 kubelet[2802]: I0904 00:06:53.746045 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa598e70-169c-4661-bc00-3b458bbaceaf-whisker-ca-bundle\") pod \"whisker-7b7f8855f8-cznkb\" (UID: \"aa598e70-169c-4661-bc00-3b458bbaceaf\") " pod="calico-system/whisker-7b7f8855f8-cznkb" Sep 4 00:06:53.746213 kubelet[2802]: I0904 00:06:53.746103 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa598e70-169c-4661-bc00-3b458bbaceaf-whisker-backend-key-pair\") pod \"whisker-7b7f8855f8-cznkb\" (UID: \"aa598e70-169c-4661-bc00-3b458bbaceaf\") " pod="calico-system/whisker-7b7f8855f8-cznkb" Sep 4 00:06:53.788464 containerd[1554]: time="2025-09-04T00:06:53.788278868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2\" id:\"24568d28901d33184bef6ad5df1c402d07e62c7177d63aa9c958261d56c71cf9\" pid:3867 exit_status:1 exited_at:{seconds:1756944413 nanos:787699706}" Sep 4 00:06:53.997946 containerd[1554]: time="2025-09-04T00:06:53.997729850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b7f8855f8-cznkb,Uid:aa598e70-169c-4661-bc00-3b458bbaceaf,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:54.204133 systemd-networkd[1457]: calid8183df0c96: Link UP Sep 4 00:06:54.205830 systemd-networkd[1457]: calid8183df0c96: Gained carrier Sep 4 
00:06:54.237173 containerd[1554]: 2025-09-04 00:06:54.053 [INFO][3882] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:06:54.237173 containerd[1554]: 2025-09-04 00:06:54.073 [INFO][3882] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0 whisker-7b7f8855f8- calico-system aa598e70-169c-4661-bc00-3b458bbaceaf 938 0 2025-09-04 00:06:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b7f8855f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc whisker-7b7f8855f8-cznkb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid8183df0c96 [] [] }} ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Namespace="calico-system" Pod="whisker-7b7f8855f8-cznkb" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-" Sep 4 00:06:54.237173 containerd[1554]: 2025-09-04 00:06:54.074 [INFO][3882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Namespace="calico-system" Pod="whisker-7b7f8855f8-cznkb" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" Sep 4 00:06:54.237173 containerd[1554]: 2025-09-04 00:06:54.124 [INFO][3894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" HandleID="k8s-pod-network.556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" Sep 4 00:06:54.238396 
containerd[1554]: 2025-09-04 00:06:54.124 [INFO][3894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" HandleID="k8s-pod-network.556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", "pod":"whisker-7b7f8855f8-cznkb", "timestamp":"2025-09-04 00:06:54.124141314 +0000 UTC"}, Hostname:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:54.238396 containerd[1554]: 2025-09-04 00:06:54.124 [INFO][3894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:54.238396 containerd[1554]: 2025-09-04 00:06:54.124 [INFO][3894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:54.238396 containerd[1554]: 2025-09-04 00:06:54.124 [INFO][3894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' Sep 4 00:06:54.238396 containerd[1554]: 2025-09-04 00:06:54.136 [INFO][3894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:54.238396 containerd[1554]: 2025-09-04 00:06:54.144 [INFO][3894] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:54.238396 containerd[1554]: 2025-09-04 00:06:54.152 [INFO][3894] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:54.238396 containerd[1554]: 2025-09-04 00:06:54.156 [INFO][3894] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:54.239072 containerd[1554]: 2025-09-04 00:06:54.160 [INFO][3894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:54.239072 containerd[1554]: 2025-09-04 00:06:54.162 [INFO][3894] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:54.239072 containerd[1554]: 2025-09-04 00:06:54.166 [INFO][3894] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db Sep 4 00:06:54.239072 containerd[1554]: 2025-09-04 00:06:54.173 [INFO][3894] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.64/26 
handle="k8s-pod-network.556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:54.239072 containerd[1554]: 2025-09-04 00:06:54.182 [INFO][3894] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.65/26] block=192.168.33.64/26 handle="k8s-pod-network.556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:54.239072 containerd[1554]: 2025-09-04 00:06:54.183 [INFO][3894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.65/26] handle="k8s-pod-network.556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:54.239072 containerd[1554]: 2025-09-04 00:06:54.183 [INFO][3894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:54.239072 containerd[1554]: 2025-09-04 00:06:54.183 [INFO][3894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.65/26] IPv6=[] ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" HandleID="k8s-pod-network.556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" Sep 4 00:06:54.239870 containerd[1554]: 2025-09-04 00:06:54.187 [INFO][3882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Namespace="calico-system" Pod="whisker-7b7f8855f8-cznkb" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0", GenerateName:"whisker-7b7f8855f8-", 
Namespace:"calico-system", SelfLink:"", UID:"aa598e70-169c-4661-bc00-3b458bbaceaf", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b7f8855f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"", Pod:"whisker-7b7f8855f8-cznkb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8183df0c96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:54.240115 containerd[1554]: 2025-09-04 00:06:54.188 [INFO][3882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.65/32] ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Namespace="calico-system" Pod="whisker-7b7f8855f8-cznkb" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" Sep 4 00:06:54.240115 containerd[1554]: 2025-09-04 00:06:54.188 [INFO][3882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8183df0c96 ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Namespace="calico-system" Pod="whisker-7b7f8855f8-cznkb" 
WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" Sep 4 00:06:54.240115 containerd[1554]: 2025-09-04 00:06:54.205 [INFO][3882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Namespace="calico-system" Pod="whisker-7b7f8855f8-cznkb" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" Sep 4 00:06:54.240351 containerd[1554]: 2025-09-04 00:06:54.207 [INFO][3882] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Namespace="calico-system" Pod="whisker-7b7f8855f8-cznkb" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0", GenerateName:"whisker-7b7f8855f8-", Namespace:"calico-system", SelfLink:"", UID:"aa598e70-169c-4661-bc00-3b458bbaceaf", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b7f8855f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", 
ContainerID:"556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db", Pod:"whisker-7b7f8855f8-cznkb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8183df0c96", MAC:"02:e1:ae:60:ab:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:54.240543 containerd[1554]: 2025-09-04 00:06:54.231 [INFO][3882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" Namespace="calico-system" Pod="whisker-7b7f8855f8-cznkb" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-whisker--7b7f8855f8--cznkb-eth0" Sep 4 00:06:54.288542 containerd[1554]: time="2025-09-04T00:06:54.287020295Z" level=info msg="connecting to shim 556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db" address="unix:///run/containerd/s/ae5585e39de07dec04d79d25f3ab5ca474b1a486603b4f451340c1cc182df1ab" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:54.338878 systemd[1]: Started cri-containerd-556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db.scope - libcontainer container 556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db. 
Sep 4 00:06:54.557108 containerd[1554]: time="2025-09-04T00:06:54.556225122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b7f8855f8-cznkb,Uid:aa598e70-169c-4661-bc00-3b458bbaceaf,Namespace:calico-system,Attempt:0,} returns sandbox id \"556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db\"" Sep 4 00:06:54.561053 containerd[1554]: time="2025-09-04T00:06:54.560998896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 00:06:55.128360 containerd[1554]: time="2025-09-04T00:06:55.128036962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2\" id:\"8df9b76956ab5bfa0df40f106513a0409a19da9c5136876424f6e35362cafa87\" pid:4040 exit_status:1 exited_at:{seconds:1756944415 nanos:127219233}" Sep 4 00:06:55.179218 kubelet[2802]: I0904 00:06:55.178691 2802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5008c380-c5ef-41a7-8255-393de025cd0c" path="/var/lib/kubelet/pods/5008c380-c5ef-41a7-8255-393de025cd0c/volumes" Sep 4 00:06:55.597563 systemd-networkd[1457]: calid8183df0c96: Gained IPv6LL Sep 4 00:06:55.802744 containerd[1554]: time="2025-09-04T00:06:55.801720092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:55.806403 containerd[1554]: time="2025-09-04T00:06:55.806347378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 00:06:55.807545 containerd[1554]: time="2025-09-04T00:06:55.807501688Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:55.813813 containerd[1554]: time="2025-09-04T00:06:55.813752621Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:55.817378 containerd[1554]: time="2025-09-04T00:06:55.817166830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.254639724s" Sep 4 00:06:55.817378 containerd[1554]: time="2025-09-04T00:06:55.817228478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 00:06:55.827500 containerd[1554]: time="2025-09-04T00:06:55.825118561Z" level=info msg="CreateContainer within sandbox \"556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 00:06:55.841169 containerd[1554]: time="2025-09-04T00:06:55.841094801Z" level=info msg="Container 89aaa9f1fdf92f5f058534f66ce370a3f7e69f33303d221a34ee70f816ac63c7: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:55.860129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280114883.mount: Deactivated successfully. 
Sep 4 00:06:55.875816 containerd[1554]: time="2025-09-04T00:06:55.875723655Z" level=info msg="CreateContainer within sandbox \"556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"89aaa9f1fdf92f5f058534f66ce370a3f7e69f33303d221a34ee70f816ac63c7\"" Sep 4 00:06:55.877866 containerd[1554]: time="2025-09-04T00:06:55.877803547Z" level=info msg="StartContainer for \"89aaa9f1fdf92f5f058534f66ce370a3f7e69f33303d221a34ee70f816ac63c7\"" Sep 4 00:06:55.884835 containerd[1554]: time="2025-09-04T00:06:55.884724298Z" level=info msg="connecting to shim 89aaa9f1fdf92f5f058534f66ce370a3f7e69f33303d221a34ee70f816ac63c7" address="unix:///run/containerd/s/ae5585e39de07dec04d79d25f3ab5ca474b1a486603b4f451340c1cc182df1ab" protocol=ttrpc version=3 Sep 4 00:06:55.917117 systemd-networkd[1457]: vxlan.calico: Link UP Sep 4 00:06:55.917145 systemd-networkd[1457]: vxlan.calico: Gained carrier Sep 4 00:06:55.970698 systemd[1]: Started cri-containerd-89aaa9f1fdf92f5f058534f66ce370a3f7e69f33303d221a34ee70f816ac63c7.scope - libcontainer container 89aaa9f1fdf92f5f058534f66ce370a3f7e69f33303d221a34ee70f816ac63c7. 
Sep 4 00:06:56.131917 containerd[1554]: time="2025-09-04T00:06:56.131281974Z" level=info msg="StartContainer for \"89aaa9f1fdf92f5f058534f66ce370a3f7e69f33303d221a34ee70f816ac63c7\" returns successfully" Sep 4 00:06:56.138853 containerd[1554]: time="2025-09-04T00:06:56.138783704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 00:06:56.168912 containerd[1554]: time="2025-09-04T00:06:56.168833474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5hqx,Uid:6725aa5b-2060-4c41-8f5a-a3dc25045bde,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:56.436252 systemd-networkd[1457]: cali068b3f3dfe4: Link UP Sep 4 00:06:56.437142 systemd-networkd[1457]: cali068b3f3dfe4: Gained carrier Sep 4 00:06:56.473967 containerd[1554]: 2025-09-04 00:06:56.288 [INFO][4174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0 coredns-668d6bf9bc- kube-system 6725aa5b-2060-4c41-8f5a-a3dc25045bde 853 0 2025-09-04 00:06:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc coredns-668d6bf9bc-g5hqx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali068b3f3dfe4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5hqx" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-" Sep 4 00:06:56.473967 containerd[1554]: 2025-09-04 00:06:56.290 [INFO][4174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5hqx" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" Sep 4 00:06:56.473967 containerd[1554]: 2025-09-04 00:06:56.352 [INFO][4189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" HandleID="k8s-pod-network.d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" Sep 4 00:06:56.474562 containerd[1554]: 2025-09-04 00:06:56.353 [INFO][4189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" HandleID="k8s-pod-network.d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f5c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", "pod":"coredns-668d6bf9bc-g5hqx", "timestamp":"2025-09-04 00:06:56.352749363 +0000 UTC"}, Hostname:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:56.474562 containerd[1554]: 2025-09-04 00:06:56.353 [INFO][4189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:56.474562 containerd[1554]: 2025-09-04 00:06:56.353 [INFO][4189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:56.474562 containerd[1554]: 2025-09-04 00:06:56.353 [INFO][4189] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' Sep 4 00:06:56.474562 containerd[1554]: 2025-09-04 00:06:56.369 [INFO][4189] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:56.474562 containerd[1554]: 2025-09-04 00:06:56.379 [INFO][4189] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:56.474562 containerd[1554]: 2025-09-04 00:06:56.387 [INFO][4189] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:56.474562 containerd[1554]: 2025-09-04 00:06:56.392 [INFO][4189] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:56.475166 containerd[1554]: 2025-09-04 00:06:56.396 [INFO][4189] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:56.475166 containerd[1554]: 2025-09-04 00:06:56.396 [INFO][4189] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:56.475166 containerd[1554]: 2025-09-04 00:06:56.399 [INFO][4189] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8 Sep 4 00:06:56.475166 containerd[1554]: 2025-09-04 00:06:56.406 [INFO][4189] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.64/26 
handle="k8s-pod-network.d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:56.475166 containerd[1554]: 2025-09-04 00:06:56.422 [INFO][4189] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.66/26] block=192.168.33.64/26 handle="k8s-pod-network.d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:56.475166 containerd[1554]: 2025-09-04 00:06:56.422 [INFO][4189] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.66/26] handle="k8s-pod-network.d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:56.475166 containerd[1554]: 2025-09-04 00:06:56.422 [INFO][4189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:56.475166 containerd[1554]: 2025-09-04 00:06:56.422 [INFO][4189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.66/26] IPv6=[] ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" HandleID="k8s-pod-network.d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" Sep 4 00:06:56.477074 containerd[1554]: 2025-09-04 00:06:56.427 [INFO][4174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5hqx" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0", GenerateName:"coredns-668d6bf9bc-", 
Namespace:"kube-system", SelfLink:"", UID:"6725aa5b-2060-4c41-8f5a-a3dc25045bde", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"", Pod:"coredns-668d6bf9bc-g5hqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali068b3f3dfe4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:56.477074 containerd[1554]: 2025-09-04 00:06:56.427 [INFO][4174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.66/32] ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5hqx" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" Sep 4 00:06:56.477074 
containerd[1554]: 2025-09-04 00:06:56.428 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali068b3f3dfe4 ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5hqx" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" Sep 4 00:06:56.477074 containerd[1554]: 2025-09-04 00:06:56.432 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5hqx" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" Sep 4 00:06:56.477074 containerd[1554]: 2025-09-04 00:06:56.433 [INFO][4174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5hqx" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6725aa5b-2060-4c41-8f5a-a3dc25045bde", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8", Pod:"coredns-668d6bf9bc-g5hqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali068b3f3dfe4", MAC:"c6:23:f0:6f:9f:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:56.477074 containerd[1554]: 2025-09-04 00:06:56.457 [INFO][4174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5hqx" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--g5hqx-eth0" Sep 4 00:06:56.536592 containerd[1554]: time="2025-09-04T00:06:56.536423293Z" level=info msg="connecting to shim d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8" address="unix:///run/containerd/s/9f12c1b88e2f4401cd90524b6e0248c77386de3dc2d48966ad6abebf66c7d880" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:56.615228 systemd[1]: Started cri-containerd-d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8.scope - 
libcontainer container d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8. Sep 4 00:06:56.745350 containerd[1554]: time="2025-09-04T00:06:56.745139598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5hqx,Uid:6725aa5b-2060-4c41-8f5a-a3dc25045bde,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8\"" Sep 4 00:06:56.756198 containerd[1554]: time="2025-09-04T00:06:56.755820279Z" level=info msg="CreateContainer within sandbox \"d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:06:56.790755 containerd[1554]: time="2025-09-04T00:06:56.789341283Z" level=info msg="Container 3dac055b8a75ad04ed99b528acd57035922ea97b11d0012b2061bdabae9fa8be: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:56.810474 containerd[1554]: time="2025-09-04T00:06:56.810383455Z" level=info msg="CreateContainer within sandbox \"d3a8ba1e8f52c169750f7835bc1019311adf71df42dbbcf628a561cbaa3a4da8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dac055b8a75ad04ed99b528acd57035922ea97b11d0012b2061bdabae9fa8be\"" Sep 4 00:06:56.814472 containerd[1554]: time="2025-09-04T00:06:56.812955963Z" level=info msg="StartContainer for \"3dac055b8a75ad04ed99b528acd57035922ea97b11d0012b2061bdabae9fa8be\"" Sep 4 00:06:56.816783 containerd[1554]: time="2025-09-04T00:06:56.816731428Z" level=info msg="connecting to shim 3dac055b8a75ad04ed99b528acd57035922ea97b11d0012b2061bdabae9fa8be" address="unix:///run/containerd/s/9f12c1b88e2f4401cd90524b6e0248c77386de3dc2d48966ad6abebf66c7d880" protocol=ttrpc version=3 Sep 4 00:06:56.894087 systemd[1]: Started cri-containerd-3dac055b8a75ad04ed99b528acd57035922ea97b11d0012b2061bdabae9fa8be.scope - libcontainer container 3dac055b8a75ad04ed99b528acd57035922ea97b11d0012b2061bdabae9fa8be. 
Sep 4 00:06:56.998668 containerd[1554]: time="2025-09-04T00:06:56.998447203Z" level=info msg="StartContainer for \"3dac055b8a75ad04ed99b528acd57035922ea97b11d0012b2061bdabae9fa8be\" returns successfully" Sep 4 00:06:57.178410 containerd[1554]: time="2025-09-04T00:06:57.178331076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7454849d-s7mwj,Uid:f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:06:57.644795 systemd-networkd[1457]: vxlan.calico: Gained IPv6LL Sep 4 00:06:57.694713 kubelet[2802]: I0904 00:06:57.691329 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g5hqx" podStartSLOduration=42.691295187 podStartE2EDuration="42.691295187s" podCreationTimestamp="2025-09-04 00:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:57.639652387 +0000 UTC m=+46.780342646" watchObservedRunningTime="2025-09-04 00:06:57.691295187 +0000 UTC m=+46.831985450" Sep 4 00:06:57.708847 systemd-networkd[1457]: cali068b3f3dfe4: Gained IPv6LL Sep 4 00:06:57.801832 systemd-networkd[1457]: calidb2958a3b64: Link UP Sep 4 00:06:57.809955 systemd-networkd[1457]: calidb2958a3b64: Gained carrier Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.387 [INFO][4312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0 calico-apiserver-7b7454849d- calico-apiserver f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3 854 0 2025-09-04 00:06:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b7454849d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc calico-apiserver-7b7454849d-s7mwj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb2958a3b64 [] [] }} ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-s7mwj" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.388 [INFO][4312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-s7mwj" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.525 [INFO][4332] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" HandleID="k8s-pod-network.4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.525 [INFO][4332] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" HandleID="k8s-pod-network.4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", "pod":"calico-apiserver-7b7454849d-s7mwj", "timestamp":"2025-09-04 
00:06:57.525361895 +0000 UTC"}, Hostname:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.525 [INFO][4332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.526 [INFO][4332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.526 [INFO][4332] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.570 [INFO][4332] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.631 [INFO][4332] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.664 [INFO][4332] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.670 [INFO][4332] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.680 [INFO][4332] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.680 [INFO][4332] ipam/ipam.go 1220: Attempting to assign 1 
addresses from block block=192.168.33.64/26 handle="k8s-pod-network.4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.694 [INFO][4332] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959 Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.736 [INFO][4332] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.766 [INFO][4332] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.67/26] block=192.168.33.64/26 handle="k8s-pod-network.4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.767 [INFO][4332] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.67/26] handle="k8s-pod-network.4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.768 [INFO][4332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:06:57.866554 containerd[1554]: 2025-09-04 00:06:57.769 [INFO][4332] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.67/26] IPv6=[] ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" HandleID="k8s-pod-network.4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" Sep 4 00:06:57.871247 containerd[1554]: 2025-09-04 00:06:57.781 [INFO][4312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-s7mwj" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0", GenerateName:"calico-apiserver-7b7454849d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7454849d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", 
ContainerID:"", Pod:"calico-apiserver-7b7454849d-s7mwj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb2958a3b64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:57.871247 containerd[1554]: 2025-09-04 00:06:57.782 [INFO][4312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.67/32] ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-s7mwj" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" Sep 4 00:06:57.871247 containerd[1554]: 2025-09-04 00:06:57.782 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb2958a3b64 ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-s7mwj" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" Sep 4 00:06:57.871247 containerd[1554]: 2025-09-04 00:06:57.814 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-s7mwj" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" Sep 4 00:06:57.871247 containerd[1554]: 2025-09-04 00:06:57.817 [INFO][4312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Namespace="calico-apiserver" 
Pod="calico-apiserver-7b7454849d-s7mwj" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0", GenerateName:"calico-apiserver-7b7454849d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7454849d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959", Pod:"calico-apiserver-7b7454849d-s7mwj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb2958a3b64", MAC:"da:cc:04:72:ef:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:57.871247 containerd[1554]: 2025-09-04 00:06:57.850 [INFO][4312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-s7mwj" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--s7mwj-eth0" Sep 4 00:06:57.964516 containerd[1554]: time="2025-09-04T00:06:57.964152984Z" level=info msg="connecting to shim 4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959" address="unix:///run/containerd/s/c2bc055b20776d5ef7ab11308113a328ced0b50b93ce55e30b7b97f15d4a44af" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:58.053145 systemd[1]: Started cri-containerd-4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959.scope - libcontainer container 4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959. Sep 4 00:06:58.173312 containerd[1554]: time="2025-09-04T00:06:58.173204061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67868797b5-gfkg4,Uid:a3c35717-be2b-4037-83b1-22b7815e168b,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:58.177067 containerd[1554]: time="2025-09-04T00:06:58.174266231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-879lm,Uid:b4c73824-aa33-4959-8741-865f24cc1aca,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:58.776713 containerd[1554]: time="2025-09-04T00:06:58.776349176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7454849d-s7mwj,Uid:f5be82fe-fe0e-4fa6-90ff-b3a883fedcc3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959\"" Sep 4 00:06:58.910208 systemd-networkd[1457]: cali4f354f1d94a: Link UP Sep 4 00:06:58.912726 systemd-networkd[1457]: cali4f354f1d94a: Gained carrier Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.478 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0 calico-kube-controllers-67868797b5- calico-system a3c35717-be2b-4037-83b1-22b7815e168b 861 0 2025-09-04 00:06:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67868797b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc calico-kube-controllers-67868797b5-gfkg4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4f354f1d94a [] [] }} ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Namespace="calico-system" Pod="calico-kube-controllers-67868797b5-gfkg4" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.480 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Namespace="calico-system" Pod="calico-kube-controllers-67868797b5-gfkg4" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.738 [INFO][4422] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" HandleID="k8s-pod-network.ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.738 [INFO][4422] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" HandleID="k8s-pod-network.ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001223b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", "pod":"calico-kube-controllers-67868797b5-gfkg4", "timestamp":"2025-09-04 00:06:58.736809953 +0000 UTC"}, Hostname:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.739 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.744 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.744 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.781 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.798 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.821 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.829 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.836 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.836 [INFO][4422] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.840 [INFO][4422] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62 Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.858 [INFO][4422] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.64/26 
handle="k8s-pod-network.ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.880 [INFO][4422] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.68/26] block=192.168.33.64/26 handle="k8s-pod-network.ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.880 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.68/26] handle="k8s-pod-network.ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.880 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:59.008750 containerd[1554]: 2025-09-04 00:06:58.881 [INFO][4422] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.68/26] IPv6=[] ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" HandleID="k8s-pod-network.ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" Sep 4 00:06:59.012033 containerd[1554]: 2025-09-04 00:06:58.893 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Namespace="calico-system" Pod="calico-kube-controllers-67868797b5-gfkg4" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0", GenerateName:"calico-kube-controllers-67868797b5-", Namespace:"calico-system", SelfLink:"", UID:"a3c35717-be2b-4037-83b1-22b7815e168b", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67868797b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"", Pod:"calico-kube-controllers-67868797b5-gfkg4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4f354f1d94a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:59.012033 containerd[1554]: 2025-09-04 00:06:58.894 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.68/32] ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Namespace="calico-system" Pod="calico-kube-controllers-67868797b5-gfkg4" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" Sep 4 00:06:59.012033 containerd[1554]: 2025-09-04 00:06:58.894 
[INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f354f1d94a ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Namespace="calico-system" Pod="calico-kube-controllers-67868797b5-gfkg4" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" Sep 4 00:06:59.012033 containerd[1554]: 2025-09-04 00:06:58.920 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Namespace="calico-system" Pod="calico-kube-controllers-67868797b5-gfkg4" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" Sep 4 00:06:59.012033 containerd[1554]: 2025-09-04 00:06:58.923 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Namespace="calico-system" Pod="calico-kube-controllers-67868797b5-gfkg4" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0", GenerateName:"calico-kube-controllers-67868797b5-", Namespace:"calico-system", SelfLink:"", UID:"a3c35717-be2b-4037-83b1-22b7815e168b", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67868797b5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62", Pod:"calico-kube-controllers-67868797b5-gfkg4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4f354f1d94a", MAC:"ce:7b:53:1d:7f:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:59.012033 containerd[1554]: 2025-09-04 00:06:58.987 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" Namespace="calico-system" Pod="calico-kube-controllers-67868797b5-gfkg4" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--kube--controllers--67868797b5--gfkg4-eth0" Sep 4 00:06:59.086195 systemd-networkd[1457]: calidd2312315c6: Link UP Sep 4 00:06:59.108869 systemd-networkd[1457]: calidd2312315c6: Gained carrier Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.500 [INFO][4402] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0 csi-node-driver- calico-system b4c73824-aa33-4959-8741-865f24cc1aca 695 0 2025-09-04 00:06:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc csi-node-driver-879lm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidd2312315c6 [] [] }} ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Namespace="calico-system" Pod="csi-node-driver-879lm" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.500 [INFO][4402] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Namespace="calico-system" Pod="csi-node-driver-879lm" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.793 [INFO][4427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" HandleID="k8s-pod-network.7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.797 [INFO][4427] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" HandleID="k8s-pod-network.7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e9f0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", "pod":"csi-node-driver-879lm", "timestamp":"2025-09-04 00:06:58.793407092 +0000 UTC"}, Hostname:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.797 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.881 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.882 [INFO][4427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.912 [INFO][4427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.930 [INFO][4427] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.945 [INFO][4427] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.951 [INFO][4427] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.969 [INFO][4427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 
host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.970 [INFO][4427] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:58.984 [INFO][4427] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:59.006 [INFO][4427] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:59.039 [INFO][4427] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.69/26] block=192.168.33.64/26 handle="k8s-pod-network.7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:59.040 [INFO][4427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.69/26] handle="k8s-pod-network.7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:59.040 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:06:59.178865 containerd[1554]: 2025-09-04 00:06:59.040 [INFO][4427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.69/26] IPv6=[] ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" HandleID="k8s-pod-network.7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" Sep 4 00:06:59.183904 containerd[1554]: 2025-09-04 00:06:59.060 [INFO][4402] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Namespace="calico-system" Pod="csi-node-driver-879lm" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4c73824-aa33-4959-8741-865f24cc1aca", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"", 
Pod:"csi-node-driver-879lm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd2312315c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:59.183904 containerd[1554]: 2025-09-04 00:06:59.060 [INFO][4402] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.69/32] ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Namespace="calico-system" Pod="csi-node-driver-879lm" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" Sep 4 00:06:59.183904 containerd[1554]: 2025-09-04 00:06:59.060 [INFO][4402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd2312315c6 ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Namespace="calico-system" Pod="csi-node-driver-879lm" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" Sep 4 00:06:59.183904 containerd[1554]: 2025-09-04 00:06:59.117 [INFO][4402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Namespace="calico-system" Pod="csi-node-driver-879lm" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" Sep 4 00:06:59.183904 containerd[1554]: 2025-09-04 00:06:59.139 [INFO][4402] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" Namespace="calico-system" Pod="csi-node-driver-879lm" 
WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4c73824-aa33-4959-8741-865f24cc1aca", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab", Pod:"csi-node-driver-879lm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd2312315c6", MAC:"4e:43:1b:33:87:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:59.183904 containerd[1554]: 2025-09-04 00:06:59.163 [INFO][4402] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" 
Namespace="calico-system" Pod="csi-node-driver-879lm" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-csi--node--driver--879lm-eth0" Sep 4 00:06:59.185987 containerd[1554]: time="2025-09-04T00:06:59.184945158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hgxtn,Uid:10488c8b-ada8-4558-ae47-834cb1933df2,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:59.195590 containerd[1554]: time="2025-09-04T00:06:59.194390808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7454849d-pjblp,Uid:bd7f54d6-d680-45c7-b829-05311bfb335e,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:06:59.240646 containerd[1554]: time="2025-09-04T00:06:59.237395259Z" level=info msg="connecting to shim ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62" address="unix:///run/containerd/s/053f092462dfe4570bfe961cd8689525103ab87af97a22490ffdba86d9300a15" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:59.446120 containerd[1554]: time="2025-09-04T00:06:59.444111919Z" level=info msg="connecting to shim 7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab" address="unix:///run/containerd/s/c1a46aef9466b04ada5020d397700fcd272a1d82924ddb90f8cbf8a33e4961c5" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:59.503245 systemd[1]: Started cri-containerd-ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62.scope - libcontainer container ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62. Sep 4 00:06:59.629485 systemd-networkd[1457]: calidb2958a3b64: Gained IPv6LL Sep 4 00:06:59.725045 systemd[1]: Started cri-containerd-7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab.scope - libcontainer container 7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab. 
Sep 4 00:07:00.114383 systemd-networkd[1457]: calibfc3b0d8c0c: Link UP Sep 4 00:07:00.129113 systemd-networkd[1457]: calibfc3b0d8c0c: Gained carrier Sep 4 00:07:00.141812 systemd-networkd[1457]: cali4f354f1d94a: Gained IPv6LL Sep 4 00:07:00.172476 containerd[1554]: time="2025-09-04T00:07:00.172281485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcfb98d55-psg7f,Uid:2549e602-d3e9-41e0-95b5-f4f119856631,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:07:00.176280 containerd[1554]: time="2025-09-04T00:07:00.176203456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vxgnf,Uid:11e53487-d6bc-496a-8cbc-9ef48e546387,Namespace:kube-system,Attempt:0,}" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.691 [INFO][4468] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0 goldmane-54d579b49d- calico-system 10488c8b-ada8-4558-ae47-834cb1933df2 859 0 2025-09-04 00:06:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc goldmane-54d579b49d-hgxtn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibfc3b0d8c0c [] [] }} ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Namespace="calico-system" Pod="goldmane-54d579b49d-hgxtn" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.695 [INFO][4468] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Namespace="calico-system" 
Pod="goldmane-54d579b49d-hgxtn" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.879 [INFO][4557] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" HandleID="k8s-pod-network.0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.879 [INFO][4557] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" HandleID="k8s-pod-network.0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000371f20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", "pod":"goldmane-54d579b49d-hgxtn", "timestamp":"2025-09-04 00:06:59.879029188 +0000 UTC"}, Hostname:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.880 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.880 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.880 [INFO][4557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.903 [INFO][4557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.922 [INFO][4557] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.947 [INFO][4557] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.953 [INFO][4557] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.966 [INFO][4557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.969 [INFO][4557] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:06:59.978 [INFO][4557] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:07:00.006 [INFO][4557] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.64/26 
handle="k8s-pod-network.0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:07:00.036 [INFO][4557] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.70/26] block=192.168.33.64/26 handle="k8s-pod-network.0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:07:00.037 [INFO][4557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.70/26] handle="k8s-pod-network.0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:07:00.037 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:07:00.186424 containerd[1554]: 2025-09-04 00:07:00.037 [INFO][4557] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.70/26] IPv6=[] ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" HandleID="k8s-pod-network.0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" Sep 4 00:07:00.189169 containerd[1554]: 2025-09-04 00:07:00.043 [INFO][4468] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Namespace="calico-system" Pod="goldmane-54d579b49d-hgxtn" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0", 
GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"10488c8b-ada8-4558-ae47-834cb1933df2", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"", Pod:"goldmane-54d579b49d-hgxtn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibfc3b0d8c0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:07:00.189169 containerd[1554]: 2025-09-04 00:07:00.053 [INFO][4468] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.70/32] ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Namespace="calico-system" Pod="goldmane-54d579b49d-hgxtn" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" Sep 4 00:07:00.189169 containerd[1554]: 2025-09-04 00:07:00.055 [INFO][4468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfc3b0d8c0c ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Namespace="calico-system" Pod="goldmane-54d579b49d-hgxtn" 
WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" Sep 4 00:07:00.189169 containerd[1554]: 2025-09-04 00:07:00.132 [INFO][4468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Namespace="calico-system" Pod="goldmane-54d579b49d-hgxtn" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" Sep 4 00:07:00.189169 containerd[1554]: 2025-09-04 00:07:00.132 [INFO][4468] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Namespace="calico-system" Pod="goldmane-54d579b49d-hgxtn" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"10488c8b-ada8-4558-ae47-834cb1933df2", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", 
ContainerID:"0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b", Pod:"goldmane-54d579b49d-hgxtn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibfc3b0d8c0c", MAC:"42:9a:1b:a8:cc:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:07:00.189169 containerd[1554]: 2025-09-04 00:07:00.165 [INFO][4468] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" Namespace="calico-system" Pod="goldmane-54d579b49d-hgxtn" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-goldmane--54d579b49d--hgxtn-eth0" Sep 4 00:07:00.354817 systemd-networkd[1457]: cali70826c71b6e: Link UP Sep 4 00:07:00.356863 systemd-networkd[1457]: cali70826c71b6e: Gained carrier Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:06:59.771 [INFO][4482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0 calico-apiserver-7b7454849d- calico-apiserver bd7f54d6-d680-45c7-b829-05311bfb335e 856 0 2025-09-04 00:06:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b7454849d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc calico-apiserver-7b7454849d-pjblp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali70826c71b6e [] [] }} ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" 
Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-pjblp" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:06:59.772 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-pjblp" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:06:59.993 [INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:06:59.993 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001245d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", "pod":"calico-apiserver-7b7454849d-pjblp", "timestamp":"2025-09-04 00:06:59.993578315 +0000 UTC"}, Hostname:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:06:59.994 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.037 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.039 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc' Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.105 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.142 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.159 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.178 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.193 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.195 [INFO][4563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.202 [INFO][4563] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.246 [INFO][4563] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.291 [INFO][4563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.71/26] block=192.168.33.64/26 handle="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.291 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.71/26] handle="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc" Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.291 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:07:00.431043 containerd[1554]: 2025-09-04 00:07:00.291 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.71/26] IPv6=[] ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:00.432422 containerd[1554]: 2025-09-04 00:07:00.305 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-pjblp" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0", GenerateName:"calico-apiserver-7b7454849d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd7f54d6-d680-45c7-b829-05311bfb335e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7454849d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", 
ContainerID:"", Pod:"calico-apiserver-7b7454849d-pjblp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70826c71b6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:07:00.432422 containerd[1554]: 2025-09-04 00:07:00.306 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.71/32] ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-pjblp" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:00.432422 containerd[1554]: 2025-09-04 00:07:00.306 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70826c71b6e ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-pjblp" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:00.432422 containerd[1554]: 2025-09-04 00:07:00.366 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-pjblp" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:00.432422 containerd[1554]: 2025-09-04 00:07:00.367 [INFO][4482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Namespace="calico-apiserver" 
Pod="calico-apiserver-7b7454849d-pjblp" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0", GenerateName:"calico-apiserver-7b7454849d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd7f54d6-d680-45c7-b829-05311bfb335e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7454849d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e", Pod:"calico-apiserver-7b7454849d-pjblp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70826c71b6e", MAC:"e6:98:41:f4:b1:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:07:00.432422 containerd[1554]: 2025-09-04 00:07:00.406 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Namespace="calico-apiserver" Pod="calico-apiserver-7b7454849d-pjblp" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:00.475467 containerd[1554]: time="2025-09-04T00:07:00.475296005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67868797b5-gfkg4,Uid:a3c35717-be2b-4037-83b1-22b7815e168b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62\"" Sep 4 00:07:00.490745 containerd[1554]: time="2025-09-04T00:07:00.490657671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-879lm,Uid:b4c73824-aa33-4959-8741-865f24cc1aca,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab\"" Sep 4 00:07:00.533282 containerd[1554]: time="2025-09-04T00:07:00.533200538Z" level=info msg="connecting to shim 0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b" address="unix:///run/containerd/s/55b5a7e50cf15a6749928e961ef9c2ac6104e703216918f0ac2f3c8fc46079eb" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:00.639998 containerd[1554]: time="2025-09-04T00:07:00.638562358Z" level=info msg="connecting to shim 5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" address="unix:///run/containerd/s/bae41dff553b90ecd360a98aabd99ae8f8fb4971a596f8037f255d164c0f8256" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:00.653236 systemd-networkd[1457]: calidd2312315c6: Gained IPv6LL Sep 4 00:07:00.766999 systemd[1]: Started cri-containerd-0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b.scope - libcontainer container 0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b. 
Sep 4 00:07:00.831919 systemd[1]: Started cri-containerd-5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e.scope - libcontainer container 5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e. Sep 4 00:07:01.086959 systemd-networkd[1457]: cali8f605f7c86e: Link UP Sep 4 00:07:01.093357 systemd-networkd[1457]: cali8f605f7c86e: Gained carrier Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.641 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0 coredns-668d6bf9bc- kube-system 11e53487-d6bc-496a-8cbc-9ef48e546387 852 0 2025-09-04 00:06:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc coredns-668d6bf9bc-vxgnf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8f605f7c86e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-vxgnf" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-" Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.642 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-vxgnf" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0" Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.867 [INFO][4667] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" HandleID="k8s-pod-network.89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0" Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.868 [INFO][4667] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" HandleID="k8s-pod-network.89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039ec30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", "pod":"coredns-668d6bf9bc-vxgnf", "timestamp":"2025-09-04 00:07:00.866755332 +0000 UTC"}, Hostname:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.868 [INFO][4667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.868 [INFO][4667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.869 [INFO][4667] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc'
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.911 [INFO][4667] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.936 [INFO][4667] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.957 [INFO][4667] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.972 [INFO][4667] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.983 [INFO][4667] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.983 [INFO][4667] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:00.986 [INFO][4667] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:01.000 [INFO][4667] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:01.016 [INFO][4667] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.72/26] block=192.168.33.64/26 handle="k8s-pod-network.89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:01.016 [INFO][4667] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.72/26] handle="k8s-pod-network.89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:01.016 [INFO][4667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 00:07:01.211577 containerd[1554]: 2025-09-04 00:07:01.016 [INFO][4667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.72/26] IPv6=[] ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" HandleID="k8s-pod-network.89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0"
Sep 4 00:07:01.215841 containerd[1554]: 2025-09-04 00:07:01.027 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-vxgnf" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"11e53487-d6bc-496a-8cbc-9ef48e546387", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"", Pod:"coredns-668d6bf9bc-vxgnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f605f7c86e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:07:01.215841 containerd[1554]: 2025-09-04 00:07:01.027 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.72/32] ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-vxgnf" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0"
Sep 4 00:07:01.215841 containerd[1554]: 2025-09-04 00:07:01.027 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f605f7c86e ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-vxgnf" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0"
Sep 4 00:07:01.215841 containerd[1554]: 2025-09-04 00:07:01.130 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-vxgnf" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0"
Sep 4 00:07:01.215841 containerd[1554]: 2025-09-04 00:07:01.133 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-vxgnf" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"11e53487-d6bc-496a-8cbc-9ef48e546387", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3", Pod:"coredns-668d6bf9bc-vxgnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f605f7c86e", MAC:"4a:85:aa:bd:a8:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:07:01.215841 containerd[1554]: 2025-09-04 00:07:01.205 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-vxgnf" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-coredns--668d6bf9bc--vxgnf-eth0"
Sep 4 00:07:01.302598 systemd-networkd[1457]: cali9bbca92bb0c: Link UP
Sep 4 00:07:01.308060 systemd-networkd[1457]: cali9bbca92bb0c: Gained carrier
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:00.673 [INFO][4589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0 calico-apiserver-5dcfb98d55- calico-apiserver 2549e602-d3e9-41e0-95b5-f4f119856631 858 0 2025-09-04 00:06:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dcfb98d55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc calico-apiserver-5dcfb98d55-psg7f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9bbca92bb0c [] [] }} ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Namespace="calico-apiserver" Pod="calico-apiserver-5dcfb98d55-psg7f" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:00.673 [INFO][4589] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Namespace="calico-apiserver" Pod="calico-apiserver-5dcfb98d55-psg7f" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:00.903 [INFO][4681] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" HandleID="k8s-pod-network.c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:00.907 [INFO][4681] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" HandleID="k8s-pod-network.c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036a3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", "pod":"calico-apiserver-5dcfb98d55-psg7f", "timestamp":"2025-09-04 00:07:00.902761857 +0000 UTC"}, Hostname:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:00.910 [INFO][4681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.016 [INFO][4681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.019 [INFO][4681] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc'
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.131 [INFO][4681] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.153 [INFO][4681] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.189 [INFO][4681] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.195 [INFO][4681] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.209 [INFO][4681] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.209 [INFO][4681] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.215 [INFO][4681] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.232 [INFO][4681] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.277 [INFO][4681] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.73/26] block=192.168.33.64/26 handle="k8s-pod-network.c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.277 [INFO][4681] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.73/26] handle="k8s-pod-network.c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" host="ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc"
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.278 [INFO][4681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 00:07:01.402511 containerd[1554]: 2025-09-04 00:07:01.278 [INFO][4681] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.73/26] IPv6=[] ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" HandleID="k8s-pod-network.c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0"
Sep 4 00:07:01.408079 containerd[1554]: 2025-09-04 00:07:01.285 [INFO][4589] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Namespace="calico-apiserver" Pod="calico-apiserver-5dcfb98d55-psg7f" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0", GenerateName:"calico-apiserver-5dcfb98d55-", Namespace:"calico-apiserver", SelfLink:"", UID:"2549e602-d3e9-41e0-95b5-f4f119856631", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcfb98d55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"", Pod:"calico-apiserver-5dcfb98d55-psg7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9bbca92bb0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:07:01.408079 containerd[1554]: 2025-09-04 00:07:01.285 [INFO][4589] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.73/32] ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Namespace="calico-apiserver" Pod="calico-apiserver-5dcfb98d55-psg7f" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0"
Sep 4 00:07:01.408079 containerd[1554]: 2025-09-04 00:07:01.285 [INFO][4589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bbca92bb0c ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Namespace="calico-apiserver" Pod="calico-apiserver-5dcfb98d55-psg7f" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0"
Sep 4 00:07:01.408079 containerd[1554]: 2025-09-04 00:07:01.316 [INFO][4589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Namespace="calico-apiserver" Pod="calico-apiserver-5dcfb98d55-psg7f" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0"
Sep 4 00:07:01.408079 containerd[1554]: 2025-09-04 00:07:01.332 [INFO][4589] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Namespace="calico-apiserver" Pod="calico-apiserver-5dcfb98d55-psg7f" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0", GenerateName:"calico-apiserver-5dcfb98d55-", Namespace:"calico-apiserver", SelfLink:"", UID:"2549e602-d3e9-41e0-95b5-f4f119856631", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcfb98d55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc", ContainerID:"c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd", Pod:"calico-apiserver-5dcfb98d55-psg7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9bbca92bb0c", MAC:"6a:f3:1a:6c:ce:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:07:01.408079 containerd[1554]: 2025-09-04 00:07:01.375 [INFO][4589] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" Namespace="calico-apiserver" Pod="calico-apiserver-5dcfb98d55-psg7f" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--5dcfb98d55--psg7f-eth0"
Sep 4 00:07:01.445753 containerd[1554]: time="2025-09-04T00:07:01.445673014Z" level=info msg="connecting to shim 89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3" address="unix:///run/containerd/s/00d2e8ec76752a5e7332c3ce7ac78334b8f4d7563441d27f00e45e91f4ef79a6" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:07:01.461290 containerd[1554]: time="2025-09-04T00:07:01.461162945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hgxtn,Uid:10488c8b-ada8-4558-ae47-834cb1933df2,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b\""
Sep 4 00:07:01.579947 containerd[1554]: time="2025-09-04T00:07:01.579646977Z" level=info msg="connecting to shim c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd" address="unix:///run/containerd/s/8ee2e186b8d1c44bc2c46520f38a5b06e5562cc859d1f2bdeb9d8120f341431a" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:07:01.634818 systemd[1]: Started cri-containerd-89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3.scope - libcontainer container 89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3.
Sep 4 00:07:01.741052 systemd-networkd[1457]: calibfc3b0d8c0c: Gained IPv6LL
Sep 4 00:07:01.805074 systemd-networkd[1457]: cali70826c71b6e: Gained IPv6LL
Sep 4 00:07:01.815665 containerd[1554]: time="2025-09-04T00:07:01.815594532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7454849d-pjblp,Uid:bd7f54d6-d680-45c7-b829-05311bfb335e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\""
Sep 4 00:07:01.826989 systemd[1]: Started cri-containerd-c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd.scope - libcontainer container c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd.
Sep 4 00:07:01.944470 containerd[1554]: time="2025-09-04T00:07:01.944150835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vxgnf,Uid:11e53487-d6bc-496a-8cbc-9ef48e546387,Namespace:kube-system,Attempt:0,} returns sandbox id \"89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3\""
Sep 4 00:07:01.958177 containerd[1554]: time="2025-09-04T00:07:01.958113625Z" level=info msg="CreateContainer within sandbox \"89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 00:07:01.997511 containerd[1554]: time="2025-09-04T00:07:01.997034201Z" level=info msg="Container 34491a58c2f0854cc8806cf5440a64b7044abc5c51b387daaaf100dc9dd6a45d: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:07:02.023093 containerd[1554]: time="2025-09-04T00:07:02.021946360Z" level=info msg="CreateContainer within sandbox \"89a6bce44d45b25dd5963c1325adaf8074d5b3b0559cb0a4a0238cb6532322d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34491a58c2f0854cc8806cf5440a64b7044abc5c51b387daaaf100dc9dd6a45d\""
Sep 4 00:07:02.028656 containerd[1554]: time="2025-09-04T00:07:02.028572777Z" level=info msg="StartContainer for \"34491a58c2f0854cc8806cf5440a64b7044abc5c51b387daaaf100dc9dd6a45d\""
Sep 4 00:07:02.034460 containerd[1554]: time="2025-09-04T00:07:02.034157710Z" level=info msg="connecting to shim 34491a58c2f0854cc8806cf5440a64b7044abc5c51b387daaaf100dc9dd6a45d" address="unix:///run/containerd/s/00d2e8ec76752a5e7332c3ce7ac78334b8f4d7563441d27f00e45e91f4ef79a6" protocol=ttrpc version=3
Sep 4 00:07:02.156693 systemd[1]: Started cri-containerd-34491a58c2f0854cc8806cf5440a64b7044abc5c51b387daaaf100dc9dd6a45d.scope - libcontainer container 34491a58c2f0854cc8806cf5440a64b7044abc5c51b387daaaf100dc9dd6a45d.
Sep 4 00:07:02.224671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950884824.mount: Deactivated successfully.
Sep 4 00:07:02.268567 containerd[1554]: time="2025-09-04T00:07:02.267518845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:02.273351 containerd[1554]: time="2025-09-04T00:07:02.272203095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 4 00:07:02.274886 containerd[1554]: time="2025-09-04T00:07:02.274829465Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:02.286205 containerd[1554]: time="2025-09-04T00:07:02.285123415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:02.287976 containerd[1554]: time="2025-09-04T00:07:02.287790061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 6.148940671s"
Sep 4 00:07:02.288163 containerd[1554]: time="2025-09-04T00:07:02.288007806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 4 00:07:02.300213 containerd[1554]: time="2025-09-04T00:07:02.298270090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 4 00:07:02.304827 containerd[1554]: time="2025-09-04T00:07:02.304761068Z" level=info msg="CreateContainer within sandbox \"556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 4 00:07:02.334781 containerd[1554]: time="2025-09-04T00:07:02.332327682Z" level=info msg="Container 105d46b119fe9728f745cfce869ad30a7b491251b19df1cb39dc464fc33d2242: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:07:02.349777 containerd[1554]: time="2025-09-04T00:07:02.348782139Z" level=info msg="StartContainer for \"34491a58c2f0854cc8806cf5440a64b7044abc5c51b387daaaf100dc9dd6a45d\" returns successfully"
Sep 4 00:07:02.365874 containerd[1554]: time="2025-09-04T00:07:02.364393357Z" level=info msg="CreateContainer within sandbox \"556efc3ac6d4264d50ef20acc239ba6a7f10392c8ce259dbaf03f0e1f08223db\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"105d46b119fe9728f745cfce869ad30a7b491251b19df1cb39dc464fc33d2242\""
Sep 4 00:07:02.369657 containerd[1554]: time="2025-09-04T00:07:02.369550864Z" level=info msg="StartContainer for \"105d46b119fe9728f745cfce869ad30a7b491251b19df1cb39dc464fc33d2242\""
Sep 4 00:07:02.377050 containerd[1554]: time="2025-09-04T00:07:02.376921927Z" level=info msg="connecting to shim 105d46b119fe9728f745cfce869ad30a7b491251b19df1cb39dc464fc33d2242" address="unix:///run/containerd/s/ae5585e39de07dec04d79d25f3ab5ca474b1a486603b4f451340c1cc182df1ab" protocol=ttrpc version=3
Sep 4 00:07:02.445116 systemd-networkd[1457]: cali8f605f7c86e: Gained IPv6LL
Sep 4 00:07:02.537424 systemd[1]: Started cri-containerd-105d46b119fe9728f745cfce869ad30a7b491251b19df1cb39dc464fc33d2242.scope - libcontainer container 105d46b119fe9728f745cfce869ad30a7b491251b19df1cb39dc464fc33d2242.
Sep 4 00:07:02.584993 containerd[1554]: time="2025-09-04T00:07:02.584911468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcfb98d55-psg7f,Uid:2549e602-d3e9-41e0-95b5-f4f119856631,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd\""
Sep 4 00:07:02.723198 containerd[1554]: time="2025-09-04T00:07:02.723094214Z" level=info msg="StartContainer for \"105d46b119fe9728f745cfce869ad30a7b491251b19df1cb39dc464fc33d2242\" returns successfully"
Sep 4 00:07:02.764827 systemd-networkd[1457]: cali9bbca92bb0c: Gained IPv6LL
Sep 4 00:07:02.858042 kubelet[2802]: I0904 00:07:02.857928 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vxgnf" podStartSLOduration=47.857892723 podStartE2EDuration="47.857892723s" podCreationTimestamp="2025-09-04 00:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:07:02.857405208 +0000 UTC m=+51.998095468" watchObservedRunningTime="2025-09-04 00:07:02.857892723 +0000 UTC m=+51.998582981"
Sep 4 00:07:02.858952 kubelet[2802]: I0904 00:07:02.858244 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b7f8855f8-cznkb" podStartSLOduration=2.122169427 podStartE2EDuration="9.858219693s" podCreationTimestamp="2025-09-04 00:06:53 +0000 UTC" firstStartedPulling="2025-09-04 00:06:54.558852761 +0000 UTC m=+43.699543005" lastFinishedPulling="2025-09-04 00:07:02.294903019 +0000 UTC m=+51.435593271" observedRunningTime="2025-09-04 00:07:02.809183731 +0000 UTC m=+51.949873993" watchObservedRunningTime="2025-09-04 00:07:02.858219693 +0000 UTC m=+51.998909954"
Sep 4 00:07:05.391758 containerd[1554]: time="2025-09-04T00:07:05.391653774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:05.393924 containerd[1554]: time="2025-09-04T00:07:05.393653194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 4 00:07:05.395753 containerd[1554]: time="2025-09-04T00:07:05.395686002Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:05.400000 containerd[1554]: time="2025-09-04T00:07:05.399918842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:05.401976 containerd[1554]: time="2025-09-04T00:07:05.401531179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.103189731s"
Sep 4 00:07:05.401976 containerd[1554]: time="2025-09-04T00:07:05.401592279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 4 00:07:05.404915 containerd[1554]: time="2025-09-04T00:07:05.404836717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 4 00:07:05.406847 containerd[1554]: time="2025-09-04T00:07:05.406734250Z" level=info msg="CreateContainer within sandbox \"4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 00:07:05.424913 containerd[1554]: time="2025-09-04T00:07:05.424829919Z" level=info msg="Container 8586a1a3e3b397383bbb22f759b3a07ba0a7ad257c11e9f3c989272657e7a3ab: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:07:05.444632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511415796.mount: Deactivated successfully.
Sep 4 00:07:05.452937 containerd[1554]: time="2025-09-04T00:07:05.452865090Z" level=info msg="CreateContainer within sandbox \"4fbc85486ade39f03887f4446058a68794b8681837944c80f2413ba30bbd1959\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8586a1a3e3b397383bbb22f759b3a07ba0a7ad257c11e9f3c989272657e7a3ab\""
Sep 4 00:07:05.454122 containerd[1554]: time="2025-09-04T00:07:05.453904070Z" level=info msg="StartContainer for \"8586a1a3e3b397383bbb22f759b3a07ba0a7ad257c11e9f3c989272657e7a3ab\""
Sep 4 00:07:05.457780 containerd[1554]: time="2025-09-04T00:07:05.457688133Z" level=info msg="connecting to shim 8586a1a3e3b397383bbb22f759b3a07ba0a7ad257c11e9f3c989272657e7a3ab" address="unix:///run/containerd/s/c2bc055b20776d5ef7ab11308113a328ced0b50b93ce55e30b7b97f15d4a44af" protocol=ttrpc version=3
Sep 4 00:07:05.466489 ntpd[1518]: Listen normally on 7 vxlan.calico 192.168.33.64:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 7 vxlan.calico 192.168.33.64:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 8 calid8183df0c96 [fe80::ecee:eeff:feee:eeee%4]:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 9 vxlan.calico [fe80::64ec:16ff:fe19:7869%5]:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 10 cali068b3f3dfe4 [fe80::ecee:eeff:feee:eeee%8]:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 11 calidb2958a3b64 [fe80::ecee:eeff:feee:eeee%9]:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 12 cali4f354f1d94a [fe80::ecee:eeff:feee:eeee%10]:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 13 calidd2312315c6 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 14 calibfc3b0d8c0c [fe80::ecee:eeff:feee:eeee%12]:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 15 cali70826c71b6e [fe80::ecee:eeff:feee:eeee%13]:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 16 cali8f605f7c86e [fe80::ecee:eeff:feee:eeee%14]:123
Sep 4 00:07:05.468063 ntpd[1518]: 4 Sep 00:07:05 ntpd[1518]: Listen normally on 17 cali9bbca92bb0c [fe80::ecee:eeff:feee:eeee%15]:123
Sep 4 00:07:05.466648 ntpd[1518]: Listen normally on 8 calid8183df0c96 [fe80::ecee:eeff:feee:eeee%4]:123
Sep 4 00:07:05.466750 ntpd[1518]: Listen normally on 9 vxlan.calico [fe80::64ec:16ff:fe19:7869%5]:123
Sep 4 00:07:05.466819 ntpd[1518]: Listen normally on 10 cali068b3f3dfe4 [fe80::ecee:eeff:feee:eeee%8]:123
Sep 4 00:07:05.466888 ntpd[1518]: Listen normally on 11 calidb2958a3b64 [fe80::ecee:eeff:feee:eeee%9]:123
Sep 4 00:07:05.466954 ntpd[1518]: Listen normally on 12 cali4f354f1d94a [fe80::ecee:eeff:feee:eeee%10]:123
Sep 4 00:07:05.467020 ntpd[1518]: Listen normally on 13 calidd2312315c6 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 00:07:05.467099 ntpd[1518]: Listen normally on 14 calibfc3b0d8c0c [fe80::ecee:eeff:feee:eeee%12]:123
Sep 4 00:07:05.467163 ntpd[1518]: Listen normally on 15 cali70826c71b6e [fe80::ecee:eeff:feee:eeee%13]:123
Sep 4 00:07:05.467241 ntpd[1518]: Listen normally on 16 cali8f605f7c86e [fe80::ecee:eeff:feee:eeee%14]:123
Sep 4 00:07:05.467307 ntpd[1518]: Listen normally on 17 cali9bbca92bb0c [fe80::ecee:eeff:feee:eeee%15]:123
Sep 4 00:07:05.510026 systemd[1]: Started cri-containerd-8586a1a3e3b397383bbb22f759b3a07ba0a7ad257c11e9f3c989272657e7a3ab.scope - libcontainer container 8586a1a3e3b397383bbb22f759b3a07ba0a7ad257c11e9f3c989272657e7a3ab.
Sep 4 00:07:05.605587 containerd[1554]: time="2025-09-04T00:07:05.605510499Z" level=info msg="StartContainer for \"8586a1a3e3b397383bbb22f759b3a07ba0a7ad257c11e9f3c989272657e7a3ab\" returns successfully"
Sep 4 00:07:06.788902 containerd[1554]: time="2025-09-04T00:07:06.788519416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:06.790687 containerd[1554]: time="2025-09-04T00:07:06.790174389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 4 00:07:06.792956 containerd[1554]: time="2025-09-04T00:07:06.792766927Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:06.797227 containerd[1554]: time="2025-09-04T00:07:06.797002636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:07:06.799970 containerd[1554]: time="2025-09-04T00:07:06.799919492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.39502712s"
Sep 4 00:07:06.800625 containerd[1554]:
time="2025-09-04T00:07:06.800531457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 4 00:07:06.803407 containerd[1554]: time="2025-09-04T00:07:06.803345047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 00:07:06.805250 containerd[1554]: time="2025-09-04T00:07:06.805189674Z" level=info msg="CreateContainer within sandbox \"7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 00:07:06.829257 containerd[1554]: time="2025-09-04T00:07:06.826458838Z" level=info msg="Container 4864c7229a1a0a860626bbd14064c95275f2d126fcd9b8e65ee6d122a7555ca0: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:06.863020 containerd[1554]: time="2025-09-04T00:07:06.862596006Z" level=info msg="CreateContainer within sandbox \"7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4864c7229a1a0a860626bbd14064c95275f2d126fcd9b8e65ee6d122a7555ca0\"" Sep 4 00:07:06.868469 containerd[1554]: time="2025-09-04T00:07:06.866652608Z" level=info msg="StartContainer for \"4864c7229a1a0a860626bbd14064c95275f2d126fcd9b8e65ee6d122a7555ca0\"" Sep 4 00:07:06.870835 containerd[1554]: time="2025-09-04T00:07:06.870784810Z" level=info msg="connecting to shim 4864c7229a1a0a860626bbd14064c95275f2d126fcd9b8e65ee6d122a7555ca0" address="unix:///run/containerd/s/c1a46aef9466b04ada5020d397700fcd272a1d82924ddb90f8cbf8a33e4961c5" protocol=ttrpc version=3 Sep 4 00:07:06.940866 systemd[1]: Started cri-containerd-4864c7229a1a0a860626bbd14064c95275f2d126fcd9b8e65ee6d122a7555ca0.scope - libcontainer container 4864c7229a1a0a860626bbd14064c95275f2d126fcd9b8e65ee6d122a7555ca0. 
Sep 4 00:07:07.145549 containerd[1554]: time="2025-09-04T00:07:07.145360918Z" level=info msg="StartContainer for \"4864c7229a1a0a860626bbd14064c95275f2d126fcd9b8e65ee6d122a7555ca0\" returns successfully" Sep 4 00:07:08.226844 kubelet[2802]: I0904 00:07:08.226738 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b7454849d-s7mwj" podStartSLOduration=35.610220924000004 podStartE2EDuration="42.226700481s" podCreationTimestamp="2025-09-04 00:06:26 +0000 UTC" firstStartedPulling="2025-09-04 00:06:58.787200582 +0000 UTC m=+47.927890836" lastFinishedPulling="2025-09-04 00:07:05.403680128 +0000 UTC m=+54.544370393" observedRunningTime="2025-09-04 00:07:05.847407059 +0000 UTC m=+54.988097320" watchObservedRunningTime="2025-09-04 00:07:08.226700481 +0000 UTC m=+57.367390744" Sep 4 00:07:09.855141 containerd[1554]: time="2025-09-04T00:07:09.855048752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:09.857295 containerd[1554]: time="2025-09-04T00:07:09.856797348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 4 00:07:09.859306 containerd[1554]: time="2025-09-04T00:07:09.859222123Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:09.864462 containerd[1554]: time="2025-09-04T00:07:09.863374127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:09.864462 containerd[1554]: time="2025-09-04T00:07:09.864363042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id 
\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.060964344s" Sep 4 00:07:09.864825 containerd[1554]: time="2025-09-04T00:07:09.864414689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 4 00:07:09.866776 containerd[1554]: time="2025-09-04T00:07:09.866736480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 4 00:07:09.907596 containerd[1554]: time="2025-09-04T00:07:09.907528175Z" level=info msg="CreateContainer within sandbox \"ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 00:07:09.923522 containerd[1554]: time="2025-09-04T00:07:09.923411831Z" level=info msg="Container 6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:09.949394 containerd[1554]: time="2025-09-04T00:07:09.949304289Z" level=info msg="CreateContainer within sandbox \"ed476d99b70349af7938241c9b9028467f24e3ebde6e6f9f99913c16abed7c62\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3\"" Sep 4 00:07:09.951458 containerd[1554]: time="2025-09-04T00:07:09.951302329Z" level=info msg="StartContainer for \"6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3\"" Sep 4 00:07:09.956557 containerd[1554]: time="2025-09-04T00:07:09.956477430Z" level=info msg="connecting to shim 6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3" 
address="unix:///run/containerd/s/053f092462dfe4570bfe961cd8689525103ab87af97a22490ffdba86d9300a15" protocol=ttrpc version=3 Sep 4 00:07:10.003923 systemd[1]: Started cri-containerd-6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3.scope - libcontainer container 6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3. Sep 4 00:07:10.109042 containerd[1554]: time="2025-09-04T00:07:10.108775933Z" level=info msg="StartContainer for \"6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3\" returns successfully" Sep 4 00:07:10.951801 kubelet[2802]: I0904 00:07:10.951328 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67868797b5-gfkg4" podStartSLOduration=28.582345733 podStartE2EDuration="37.951287391s" podCreationTimestamp="2025-09-04 00:06:33 +0000 UTC" firstStartedPulling="2025-09-04 00:07:00.497096413 +0000 UTC m=+49.637786663" lastFinishedPulling="2025-09-04 00:07:09.866038084 +0000 UTC m=+59.006728321" observedRunningTime="2025-09-04 00:07:10.941342212 +0000 UTC m=+60.082032475" watchObservedRunningTime="2025-09-04 00:07:10.951287391 +0000 UTC m=+60.091977650" Sep 4 00:07:11.151955 containerd[1554]: time="2025-09-04T00:07:11.151886965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3\" id:\"cc8a94ec74603e6652e5a44d3e245b630e06dac9bc56a3831ecd952b7c7c834a\" pid:5091 exited_at:{seconds:1756944431 nanos:148866933}" Sep 4 00:07:12.705026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886619481.mount: Deactivated successfully. 
Sep 4 00:07:14.169299 containerd[1554]: time="2025-09-04T00:07:14.167377382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:14.170480 containerd[1554]: time="2025-09-04T00:07:14.170391198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 4 00:07:14.172190 containerd[1554]: time="2025-09-04T00:07:14.172147490Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:14.181067 containerd[1554]: time="2025-09-04T00:07:14.180778055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:14.184280 containerd[1554]: time="2025-09-04T00:07:14.183733959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.31671195s" Sep 4 00:07:14.184280 containerd[1554]: time="2025-09-04T00:07:14.183800428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 4 00:07:14.187277 containerd[1554]: time="2025-09-04T00:07:14.187199739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 00:07:14.191926 containerd[1554]: time="2025-09-04T00:07:14.191847641Z" level=info msg="CreateContainer within sandbox \"0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 4 00:07:14.215533 containerd[1554]: time="2025-09-04T00:07:14.214337839Z" level=info msg="Container dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:14.243318 containerd[1554]: time="2025-09-04T00:07:14.243223415Z" level=info msg="CreateContainer within sandbox \"0cd6d64ba434e6d91561857b4c2cafdb7cda5972c69384f1a8d7a2ef26807b2b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728\"" Sep 4 00:07:14.245426 containerd[1554]: time="2025-09-04T00:07:14.245225160Z" level=info msg="StartContainer for \"dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728\"" Sep 4 00:07:14.248465 containerd[1554]: time="2025-09-04T00:07:14.248383324Z" level=info msg="connecting to shim dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728" address="unix:///run/containerd/s/55b5a7e50cf15a6749928e961ef9c2ac6104e703216918f0ac2f3c8fc46079eb" protocol=ttrpc version=3 Sep 4 00:07:14.335774 systemd[1]: Started cri-containerd-dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728.scope - libcontainer container dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728. 
Sep 4 00:07:14.492625 containerd[1554]: time="2025-09-04T00:07:14.492305166Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:14.493600 containerd[1554]: time="2025-09-04T00:07:14.493499424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 4 00:07:14.496458 containerd[1554]: time="2025-09-04T00:07:14.496339989Z" level=info msg="StartContainer for \"dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728\" returns successfully" Sep 4 00:07:14.500286 containerd[1554]: time="2025-09-04T00:07:14.500162713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 312.897633ms" Sep 4 00:07:14.500286 containerd[1554]: time="2025-09-04T00:07:14.500239841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 00:07:14.506054 containerd[1554]: time="2025-09-04T00:07:14.505889626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 00:07:14.508932 containerd[1554]: time="2025-09-04T00:07:14.508843515Z" level=info msg="CreateContainer within sandbox \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 00:07:14.527077 containerd[1554]: time="2025-09-04T00:07:14.526945270Z" level=info msg="Container de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:14.548736 containerd[1554]: 
time="2025-09-04T00:07:14.548654195Z" level=info msg="CreateContainer within sandbox \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\"" Sep 4 00:07:14.553375 containerd[1554]: time="2025-09-04T00:07:14.553205315Z" level=info msg="StartContainer for \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\"" Sep 4 00:07:14.562153 containerd[1554]: time="2025-09-04T00:07:14.562069126Z" level=info msg="connecting to shim de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052" address="unix:///run/containerd/s/bae41dff553b90ecd360a98aabd99ae8f8fb4971a596f8037f255d164c0f8256" protocol=ttrpc version=3 Sep 4 00:07:14.622209 systemd[1]: Started cri-containerd-de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052.scope - libcontainer container de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052. 
Sep 4 00:07:14.740466 containerd[1554]: time="2025-09-04T00:07:14.739512248Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:14.741083 containerd[1554]: time="2025-09-04T00:07:14.741035182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 4 00:07:14.751198 containerd[1554]: time="2025-09-04T00:07:14.751006163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 245.042681ms" Sep 4 00:07:14.751198 containerd[1554]: time="2025-09-04T00:07:14.751090915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 00:07:14.756467 containerd[1554]: time="2025-09-04T00:07:14.755072309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 4 00:07:14.760754 containerd[1554]: time="2025-09-04T00:07:14.760692096Z" level=info msg="StartContainer for \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" returns successfully" Sep 4 00:07:14.761969 containerd[1554]: time="2025-09-04T00:07:14.761894761Z" level=info msg="CreateContainer within sandbox \"c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 00:07:14.776456 containerd[1554]: time="2025-09-04T00:07:14.776343785Z" level=info msg="Container 2420145475ae9c45253bf7cd45a1d332f46b7604b3770e179950eafcb7a4ab2c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:14.797191 containerd[1554]: 
time="2025-09-04T00:07:14.796986284Z" level=info msg="CreateContainer within sandbox \"c792f89f50c67b34d5c2c2c3604d5b1e2b8c15301eca6b6829109ddc1f54b9dd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2420145475ae9c45253bf7cd45a1d332f46b7604b3770e179950eafcb7a4ab2c\"" Sep 4 00:07:14.800160 containerd[1554]: time="2025-09-04T00:07:14.800102006Z" level=info msg="StartContainer for \"2420145475ae9c45253bf7cd45a1d332f46b7604b3770e179950eafcb7a4ab2c\"" Sep 4 00:07:14.804590 containerd[1554]: time="2025-09-04T00:07:14.804523260Z" level=info msg="connecting to shim 2420145475ae9c45253bf7cd45a1d332f46b7604b3770e179950eafcb7a4ab2c" address="unix:///run/containerd/s/8ee2e186b8d1c44bc2c46520f38a5b06e5562cc859d1f2bdeb9d8120f341431a" protocol=ttrpc version=3 Sep 4 00:07:14.857058 systemd[1]: Started cri-containerd-2420145475ae9c45253bf7cd45a1d332f46b7604b3770e179950eafcb7a4ab2c.scope - libcontainer container 2420145475ae9c45253bf7cd45a1d332f46b7604b3770e179950eafcb7a4ab2c. 
Sep 4 00:07:14.971945 kubelet[2802]: I0904 00:07:14.971788 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-hgxtn" podStartSLOduration=29.25627961 podStartE2EDuration="41.971492474s" podCreationTimestamp="2025-09-04 00:06:33 +0000 UTC" firstStartedPulling="2025-09-04 00:07:01.471371833 +0000 UTC m=+50.612062071" lastFinishedPulling="2025-09-04 00:07:14.186584676 +0000 UTC m=+63.327274935" observedRunningTime="2025-09-04 00:07:14.969732721 +0000 UTC m=+64.110423122" watchObservedRunningTime="2025-09-04 00:07:14.971492474 +0000 UTC m=+64.112182735" Sep 4 00:07:15.009715 kubelet[2802]: I0904 00:07:15.007048 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b7454849d-pjblp" podStartSLOduration=36.333201846 podStartE2EDuration="49.006877428s" podCreationTimestamp="2025-09-04 00:06:26 +0000 UTC" firstStartedPulling="2025-09-04 00:07:01.830254178 +0000 UTC m=+50.970944429" lastFinishedPulling="2025-09-04 00:07:14.503929758 +0000 UTC m=+63.644620011" observedRunningTime="2025-09-04 00:07:15.005767775 +0000 UTC m=+64.146458036" watchObservedRunningTime="2025-09-04 00:07:15.006877428 +0000 UTC m=+64.147567684" Sep 4 00:07:15.090895 containerd[1554]: time="2025-09-04T00:07:15.090828598Z" level=info msg="StartContainer for \"2420145475ae9c45253bf7cd45a1d332f46b7604b3770e179950eafcb7a4ab2c\" returns successfully" Sep 4 00:07:16.004304 kubelet[2802]: I0904 00:07:16.003076 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dcfb98d55-psg7f" podStartSLOduration=36.847465128 podStartE2EDuration="49.00303661s" podCreationTimestamp="2025-09-04 00:06:27 +0000 UTC" firstStartedPulling="2025-09-04 00:07:02.599292156 +0000 UTC m=+51.739982399" lastFinishedPulling="2025-09-04 00:07:14.754863627 +0000 UTC m=+63.895553881" observedRunningTime="2025-09-04 00:07:15.999060074 +0000 UTC m=+65.139750341" 
watchObservedRunningTime="2025-09-04 00:07:16.00303661 +0000 UTC m=+65.143726877" Sep 4 00:07:16.614163 containerd[1554]: time="2025-09-04T00:07:16.614087958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728\" id:\"92fa6a351b984b134f79f0700fd29127e5d436c7f59b7eaa2c47c1a0e7a3aee9\" pid:5248 exited_at:{seconds:1756944436 nanos:613095532}" Sep 4 00:07:16.770185 containerd[1554]: time="2025-09-04T00:07:16.770094634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:16.773220 containerd[1554]: time="2025-09-04T00:07:16.773153629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 4 00:07:16.776462 containerd[1554]: time="2025-09-04T00:07:16.774519569Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:16.780533 containerd[1554]: time="2025-09-04T00:07:16.780467170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:16.782796 containerd[1554]: time="2025-09-04T00:07:16.782570130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.027438154s" Sep 4 00:07:16.782796 containerd[1554]: time="2025-09-04T00:07:16.782643150Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 4 00:07:16.792413 containerd[1554]: time="2025-09-04T00:07:16.792336340Z" level=info msg="CreateContainer within sandbox \"7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 00:07:16.818471 containerd[1554]: time="2025-09-04T00:07:16.817593588Z" level=info msg="Container d81c3056551f6bbc0afca227877a07dc0e2ff5612c3d86b49adfd7f742f9ba37: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:16.852393 containerd[1554]: time="2025-09-04T00:07:16.852314839Z" level=info msg="CreateContainer within sandbox \"7c007e9640c09ab0b722c0ee7f06333de66da478132baa74bf47d39e2639d1ab\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d81c3056551f6bbc0afca227877a07dc0e2ff5612c3d86b49adfd7f742f9ba37\"" Sep 4 00:07:16.868140 containerd[1554]: time="2025-09-04T00:07:16.867869778Z" level=info msg="StartContainer for \"d81c3056551f6bbc0afca227877a07dc0e2ff5612c3d86b49adfd7f742f9ba37\"" Sep 4 00:07:16.876902 containerd[1554]: time="2025-09-04T00:07:16.876832488Z" level=info msg="connecting to shim d81c3056551f6bbc0afca227877a07dc0e2ff5612c3d86b49adfd7f742f9ba37" address="unix:///run/containerd/s/c1a46aef9466b04ada5020d397700fcd272a1d82924ddb90f8cbf8a33e4961c5" protocol=ttrpc version=3 Sep 4 00:07:16.939342 systemd[1]: Started cri-containerd-d81c3056551f6bbc0afca227877a07dc0e2ff5612c3d86b49adfd7f742f9ba37.scope - libcontainer container d81c3056551f6bbc0afca227877a07dc0e2ff5612c3d86b49adfd7f742f9ba37. 
Sep 4 00:07:16.989241 kubelet[2802]: I0904 00:07:16.989189 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:07:17.169489 containerd[1554]: time="2025-09-04T00:07:17.167792431Z" level=info msg="StartContainer for \"d81c3056551f6bbc0afca227877a07dc0e2ff5612c3d86b49adfd7f742f9ba37\" returns successfully" Sep 4 00:07:17.350314 kubelet[2802]: I0904 00:07:17.350264 2802 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 00:07:17.350314 kubelet[2802]: I0904 00:07:17.350321 2802 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 00:07:19.853496 kubelet[2802]: I0904 00:07:19.852984 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-879lm" podStartSLOduration=31.561480457000002 podStartE2EDuration="47.85294134s" podCreationTimestamp="2025-09-04 00:06:32 +0000 UTC" firstStartedPulling="2025-09-04 00:07:00.494933012 +0000 UTC m=+49.635623251" lastFinishedPulling="2025-09-04 00:07:16.786393893 +0000 UTC m=+65.927084134" observedRunningTime="2025-09-04 00:07:18.028773932 +0000 UTC m=+67.169464193" watchObservedRunningTime="2025-09-04 00:07:19.85294134 +0000 UTC m=+68.993631601" Sep 4 00:07:20.068906 systemd[1]: Started sshd@9-10.128.0.26:22-147.75.109.163:50772.service - OpenSSH per-connection server daemon (147.75.109.163:50772). Sep 4 00:07:20.418085 sshd[5306]: Accepted publickey for core from 147.75.109.163 port 50772 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:20.422159 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:20.432293 systemd-logind[1522]: New session 10 of user core. Sep 4 00:07:20.439749 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 4 00:07:20.845034 sshd[5312]: Connection closed by 147.75.109.163 port 50772 Sep 4 00:07:20.846580 sshd-session[5306]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:20.861034 systemd[1]: sshd@9-10.128.0.26:22-147.75.109.163:50772.service: Deactivated successfully. Sep 4 00:07:20.867052 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 00:07:20.872503 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. Sep 4 00:07:20.875888 systemd-logind[1522]: Removed session 10. Sep 4 00:07:24.657609 containerd[1554]: time="2025-09-04T00:07:24.657272808Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2\" id:\"6f84e02ff38e43db857bc78119a6670fbfe514b98da506ca918d89affb256851\" pid:5343 exited_at:{seconds:1756944444 nanos:656286536}" Sep 4 00:07:25.908342 systemd[1]: Started sshd@10-10.128.0.26:22-147.75.109.163:50786.service - OpenSSH per-connection server daemon (147.75.109.163:50786). Sep 4 00:07:26.234945 sshd[5356]: Accepted publickey for core from 147.75.109.163 port 50786 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:26.237564 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:26.248196 systemd-logind[1522]: New session 11 of user core. Sep 4 00:07:26.257110 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 00:07:26.549457 sshd[5360]: Connection closed by 147.75.109.163 port 50786 Sep 4 00:07:26.548860 sshd-session[5356]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:26.557291 systemd[1]: sshd@10-10.128.0.26:22-147.75.109.163:50786.service: Deactivated successfully. Sep 4 00:07:26.564986 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 00:07:26.571033 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. Sep 4 00:07:26.573090 systemd-logind[1522]: Removed session 11. 
Sep 4 00:07:31.619904 systemd[1]: Started sshd@11-10.128.0.26:22-147.75.109.163:33368.service - OpenSSH per-connection server daemon (147.75.109.163:33368). Sep 4 00:07:31.959750 sshd[5375]: Accepted publickey for core from 147.75.109.163 port 33368 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:31.963245 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:31.978561 systemd-logind[1522]: New session 12 of user core. Sep 4 00:07:31.986124 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 00:07:32.361829 sshd[5377]: Connection closed by 147.75.109.163 port 33368 Sep 4 00:07:32.362569 sshd-session[5375]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:32.371795 systemd[1]: sshd@11-10.128.0.26:22-147.75.109.163:33368.service: Deactivated successfully. Sep 4 00:07:32.376215 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 00:07:32.378130 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. Sep 4 00:07:32.383304 systemd-logind[1522]: Removed session 12. Sep 4 00:07:32.421611 systemd[1]: Started sshd@12-10.128.0.26:22-147.75.109.163:33372.service - OpenSSH per-connection server daemon (147.75.109.163:33372). Sep 4 00:07:32.767094 sshd[5390]: Accepted publickey for core from 147.75.109.163 port 33372 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:32.776663 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:32.809454 systemd-logind[1522]: New session 13 of user core. Sep 4 00:07:32.814870 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 00:07:33.193308 sshd[5392]: Connection closed by 147.75.109.163 port 33372 Sep 4 00:07:33.195096 sshd-session[5390]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:33.211778 systemd[1]: sshd@12-10.128.0.26:22-147.75.109.163:33372.service: Deactivated successfully. 
Sep 4 00:07:33.215621 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit. Sep 4 00:07:33.225494 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 00:07:33.233613 systemd-logind[1522]: Removed session 13. Sep 4 00:07:33.255844 systemd[1]: Started sshd@13-10.128.0.26:22-147.75.109.163:33376.service - OpenSSH per-connection server daemon (147.75.109.163:33376). Sep 4 00:07:33.602473 sshd[5402]: Accepted publickey for core from 147.75.109.163 port 33376 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:33.605264 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:33.619774 systemd-logind[1522]: New session 14 of user core. Sep 4 00:07:33.631651 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 00:07:34.005780 sshd[5404]: Connection closed by 147.75.109.163 port 33376 Sep 4 00:07:34.008768 sshd-session[5402]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:34.019120 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit. Sep 4 00:07:34.020462 systemd[1]: sshd@13-10.128.0.26:22-147.75.109.163:33376.service: Deactivated successfully. Sep 4 00:07:34.028915 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 00:07:34.036970 systemd-logind[1522]: Removed session 14. 
Sep 4 00:07:36.935781 containerd[1554]: time="2025-09-04T00:07:36.935654761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728\" id:\"c82c8859407e0f43b401b04a2745bd76b44833116a65e548d9716eb81398c238\" pid:5428 exited_at:{seconds:1756944456 nanos:934407903}" Sep 4 00:07:38.663510 kubelet[2802]: I0904 00:07:38.663121 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:07:38.764421 containerd[1554]: time="2025-09-04T00:07:38.764060856Z" level=info msg="StopContainer for \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" with timeout 30 (s)" Sep 4 00:07:38.769010 containerd[1554]: time="2025-09-04T00:07:38.768802115Z" level=info msg="Stop container \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" with signal terminated" Sep 4 00:07:38.900324 systemd[1]: cri-containerd-de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052.scope: Deactivated successfully. Sep 4 00:07:38.901921 systemd[1]: cri-containerd-de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052.scope: Consumed 2.579s CPU time, 46.5M memory peak. 
Sep 4 00:07:38.909649 containerd[1554]: time="2025-09-04T00:07:38.909366762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" id:\"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" pid:5162 exit_status:1 exited_at:{seconds:1756944458 nanos:906634971}" Sep 4 00:07:38.909649 containerd[1554]: time="2025-09-04T00:07:38.909533407Z" level=info msg="received exit event container_id:\"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" id:\"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" pid:5162 exit_status:1 exited_at:{seconds:1756944458 nanos:906634971}" Sep 4 00:07:38.969513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052-rootfs.mount: Deactivated successfully. Sep 4 00:07:39.074360 systemd[1]: Started sshd@14-10.128.0.26:22-147.75.109.163:33384.service - OpenSSH per-connection server daemon (147.75.109.163:33384). Sep 4 00:07:39.412778 sshd[5468]: Accepted publickey for core from 147.75.109.163 port 33384 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:39.415777 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:39.430168 systemd-logind[1522]: New session 15 of user core. Sep 4 00:07:39.440842 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 00:07:39.745998 sshd[5470]: Connection closed by 147.75.109.163 port 33384 Sep 4 00:07:39.747350 sshd-session[5468]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:39.756609 systemd[1]: sshd@14-10.128.0.26:22-147.75.109.163:33384.service: Deactivated successfully. Sep 4 00:07:39.760929 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 00:07:39.762822 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit. 
Sep 4 00:07:39.766712 systemd-logind[1522]: Removed session 15. Sep 4 00:07:40.526099 containerd[1554]: time="2025-09-04T00:07:40.525971654Z" level=info msg="StopContainer for \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" returns successfully" Sep 4 00:07:40.528378 containerd[1554]: time="2025-09-04T00:07:40.527577026Z" level=info msg="StopPodSandbox for \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\"" Sep 4 00:07:40.528378 containerd[1554]: time="2025-09-04T00:07:40.527719047Z" level=info msg="Container to stop \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:07:40.545200 systemd[1]: cri-containerd-5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e.scope: Deactivated successfully. Sep 4 00:07:40.553046 containerd[1554]: time="2025-09-04T00:07:40.552881854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" id:\"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" pid:4719 exit_status:137 exited_at:{seconds:1756944460 nanos:552007083}" Sep 4 00:07:40.610703 containerd[1554]: time="2025-09-04T00:07:40.610622148Z" level=info msg="shim disconnected" id=5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e namespace=k8s.io Sep 4 00:07:40.610703 containerd[1554]: time="2025-09-04T00:07:40.610701458Z" level=warning msg="cleaning up after shim disconnected" id=5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e namespace=k8s.io Sep 4 00:07:40.611040 containerd[1554]: time="2025-09-04T00:07:40.610715066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 00:07:40.616598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e-rootfs.mount: Deactivated successfully. 
Sep 4 00:07:40.649317 containerd[1554]: time="2025-09-04T00:07:40.649032404Z" level=error msg="Failed to handle event container_id:\"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" id:\"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" pid:4719 exit_status:137 exited_at:{seconds:1756944460 nanos:552007083} for 5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Sep 4 00:07:40.649317 containerd[1554]: time="2025-09-04T00:07:40.649124668Z" level=info msg="received exit event sandbox_id:\"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" exit_status:137 exited_at:{seconds:1756944460 nanos:552007083}" Sep 4 00:07:40.656592 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e-shm.mount: Deactivated successfully. Sep 4 00:07:40.760475 systemd-networkd[1457]: cali70826c71b6e: Link DOWN Sep 4 00:07:40.760493 systemd-networkd[1457]: cali70826c71b6e: Lost carrier Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.754 [INFO][5533] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.755 [INFO][5533] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" iface="eth0" netns="/var/run/netns/cni-4641e1d6-73f3-87f8-a0a1-fb95a13122d8" Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.757 [INFO][5533] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" iface="eth0" netns="/var/run/netns/cni-4641e1d6-73f3-87f8-a0a1-fb95a13122d8" Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.769 [INFO][5533] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" after=12.554179ms iface="eth0" netns="/var/run/netns/cni-4641e1d6-73f3-87f8-a0a1-fb95a13122d8" Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.769 [INFO][5533] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.769 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.844 [INFO][5541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.844 [INFO][5541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.844 [INFO][5541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.921 [INFO][5541] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.921 [INFO][5541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.924 [INFO][5541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:07:40.934099 containerd[1554]: 2025-09-04 00:07:40.928 [INFO][5533] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:07:40.940515 containerd[1554]: time="2025-09-04T00:07:40.940422170Z" level=info msg="TearDown network for sandbox \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" successfully" Sep 4 00:07:40.941029 containerd[1554]: time="2025-09-04T00:07:40.940780272Z" level=info msg="StopPodSandbox for \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" returns successfully" Sep 4 00:07:40.943760 systemd[1]: run-netns-cni\x2d4641e1d6\x2d73f3\x2d87f8\x2da0a1\x2dfb95a13122d8.mount: Deactivated successfully. 
Sep 4 00:07:41.029864 containerd[1554]: time="2025-09-04T00:07:41.029790721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3\" id:\"341e0ad6a0de26bea36277a437662ee520e7855afa355f4f1ec5e6b2ae944f0a\" pid:5569 exited_at:{seconds:1756944461 nanos:28878392}" Sep 4 00:07:41.072080 kubelet[2802]: I0904 00:07:41.071975 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd7f54d6-d680-45c7-b829-05311bfb335e-calico-apiserver-certs\") pod \"bd7f54d6-d680-45c7-b829-05311bfb335e\" (UID: \"bd7f54d6-d680-45c7-b829-05311bfb335e\") " Sep 4 00:07:41.074586 kubelet[2802]: I0904 00:07:41.072119 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkmmd\" (UniqueName: \"kubernetes.io/projected/bd7f54d6-d680-45c7-b829-05311bfb335e-kube-api-access-gkmmd\") pod \"bd7f54d6-d680-45c7-b829-05311bfb335e\" (UID: \"bd7f54d6-d680-45c7-b829-05311bfb335e\") " Sep 4 00:07:41.083489 kubelet[2802]: I0904 00:07:41.083338 2802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7f54d6-d680-45c7-b829-05311bfb335e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "bd7f54d6-d680-45c7-b829-05311bfb335e" (UID: "bd7f54d6-d680-45c7-b829-05311bfb335e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:07:41.084223 systemd[1]: var-lib-kubelet-pods-bd7f54d6\x2dd680\x2d45c7\x2db829\x2d05311bfb335e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgkmmd.mount: Deactivated successfully. 
Sep 4 00:07:41.084620 kubelet[2802]: I0904 00:07:41.084573 2802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7f54d6-d680-45c7-b829-05311bfb335e-kube-api-access-gkmmd" (OuterVolumeSpecName: "kube-api-access-gkmmd") pod "bd7f54d6-d680-45c7-b829-05311bfb335e" (UID: "bd7f54d6-d680-45c7-b829-05311bfb335e"). InnerVolumeSpecName "kube-api-access-gkmmd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:07:41.093425 systemd[1]: var-lib-kubelet-pods-bd7f54d6\x2dd680\x2d45c7\x2db829\x2d05311bfb335e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 4 00:07:41.102723 kubelet[2802]: I0904 00:07:41.102302 2802 scope.go:117] "RemoveContainer" containerID="de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052" Sep 4 00:07:41.108470 containerd[1554]: time="2025-09-04T00:07:41.107802278Z" level=info msg="RemoveContainer for \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\"" Sep 4 00:07:41.120822 systemd[1]: Removed slice kubepods-besteffort-podbd7f54d6_d680_45c7_b829_05311bfb335e.slice - libcontainer container kubepods-besteffort-podbd7f54d6_d680_45c7_b829_05311bfb335e.slice. Sep 4 00:07:41.121506 systemd[1]: kubepods-besteffort-podbd7f54d6_d680_45c7_b829_05311bfb335e.slice: Consumed 2.659s CPU time, 46.7M memory peak. 
Sep 4 00:07:41.122241 containerd[1554]: time="2025-09-04T00:07:41.122069423Z" level=info msg="RemoveContainer for \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" returns successfully" Sep 4 00:07:41.124007 kubelet[2802]: I0904 00:07:41.123960 2802 scope.go:117] "RemoveContainer" containerID="de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052" Sep 4 00:07:41.125555 containerd[1554]: time="2025-09-04T00:07:41.125482062Z" level=error msg="ContainerStatus for \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\": not found" Sep 4 00:07:41.126844 kubelet[2802]: E0904 00:07:41.125811 2802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\": not found" containerID="de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052" Sep 4 00:07:41.126844 kubelet[2802]: I0904 00:07:41.125867 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052"} err="failed to get container status \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\": rpc error: code = NotFound desc = an error occurred when try to find container \"de68256286fafa0ab14acc72886b88d52c2443d0f43d911c77d1ab3d205e4052\": not found" Sep 4 00:07:41.172176 kubelet[2802]: I0904 00:07:41.172100 2802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7f54d6-d680-45c7-b829-05311bfb335e" path="/var/lib/kubelet/pods/bd7f54d6-d680-45c7-b829-05311bfb335e/volumes" Sep 4 00:07:41.173071 kubelet[2802]: I0904 00:07:41.173036 2802 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/bd7f54d6-d680-45c7-b829-05311bfb335e-calico-apiserver-certs\") on node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" DevicePath \"\"" Sep 4 00:07:41.173326 kubelet[2802]: I0904 00:07:41.173073 2802 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gkmmd\" (UniqueName: \"kubernetes.io/projected/bd7f54d6-d680-45c7-b829-05311bfb335e-kube-api-access-gkmmd\") on node \"ci-4372-1-0-nightly-20250903-2100-efcf35bfae9208b4dddc\" DevicePath \"\"" Sep 4 00:07:42.121288 containerd[1554]: time="2025-09-04T00:07:42.121181173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" id:\"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" pid:4719 exit_status:137 exited_at:{seconds:1756944460 nanos:552007083}" Sep 4 00:07:42.923408 containerd[1554]: time="2025-09-04T00:07:42.923182836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3\" id:\"be651faa79c4048b1dbecf88ccd48771db60aec234f0eb6e0591b43684ae78d3\" pid:5596 exited_at:{seconds:1756944462 nanos:922049033}" Sep 4 00:07:43.466355 ntpd[1518]: Deleting interface #15 cali70826c71b6e, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=38 secs Sep 4 00:07:43.466951 ntpd[1518]: 4 Sep 00:07:43 ntpd[1518]: Deleting interface #15 cali70826c71b6e, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=38 secs Sep 4 00:07:44.803044 systemd[1]: Started sshd@15-10.128.0.26:22-147.75.109.163:48236.service - OpenSSH per-connection server daemon (147.75.109.163:48236). 
Sep 4 00:07:45.124531 sshd[5606]: Accepted publickey for core from 147.75.109.163 port 48236 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:45.127574 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:45.139454 systemd-logind[1522]: New session 16 of user core. Sep 4 00:07:45.150825 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 00:07:45.443483 sshd[5608]: Connection closed by 147.75.109.163 port 48236 Sep 4 00:07:45.445235 sshd-session[5606]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:45.452117 systemd[1]: sshd@15-10.128.0.26:22-147.75.109.163:48236.service: Deactivated successfully. Sep 4 00:07:45.457214 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 00:07:45.462334 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit. Sep 4 00:07:45.464910 systemd-logind[1522]: Removed session 16. Sep 4 00:07:46.112405 containerd[1554]: time="2025-09-04T00:07:46.112302866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728\" id:\"e7cc70d5c86794f03a78d32b168fa1ec51d0f20f56b8627a7d7281c643aaa03d\" pid:5634 exited_at:{seconds:1756944466 nanos:111589605}" Sep 4 00:07:50.502971 systemd[1]: Started sshd@16-10.128.0.26:22-147.75.109.163:42504.service - OpenSSH per-connection server daemon (147.75.109.163:42504). Sep 4 00:07:50.833381 sshd[5648]: Accepted publickey for core from 147.75.109.163 port 42504 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:50.836290 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:50.853572 systemd-logind[1522]: New session 17 of user core. Sep 4 00:07:50.859845 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 4 00:07:51.231389 sshd[5650]: Connection closed by 147.75.109.163 port 42504 Sep 4 00:07:51.232776 sshd-session[5648]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:51.249133 systemd[1]: sshd@16-10.128.0.26:22-147.75.109.163:42504.service: Deactivated successfully. Sep 4 00:07:51.259392 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 00:07:51.263900 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit. Sep 4 00:07:51.267673 systemd-logind[1522]: Removed session 17. Sep 4 00:07:54.725276 containerd[1554]: time="2025-09-04T00:07:54.725202072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14cabbca30963e1b99c50229390974477bd54d1680806762f13290943b02dba2\" id:\"a61dd76bcb49c619c0d89237a148bf5f4dcb53b56fd9eebbc280541fc7447622\" pid:5676 exited_at:{seconds:1756944474 nanos:724249629}" Sep 4 00:07:56.297905 systemd[1]: Started sshd@17-10.128.0.26:22-147.75.109.163:42514.service - OpenSSH per-connection server daemon (147.75.109.163:42514). Sep 4 00:07:56.659488 sshd[5688]: Accepted publickey for core from 147.75.109.163 port 42514 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:56.660877 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:56.671516 systemd-logind[1522]: New session 18 of user core. Sep 4 00:07:56.678944 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 00:07:57.103474 sshd[5691]: Connection closed by 147.75.109.163 port 42514 Sep 4 00:07:57.107872 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:57.119550 systemd[1]: sshd@17-10.128.0.26:22-147.75.109.163:42514.service: Deactivated successfully. Sep 4 00:07:57.120119 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit. Sep 4 00:07:57.128888 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 00:07:57.140002 systemd-logind[1522]: Removed session 18. 
Sep 4 00:07:57.171157 systemd[1]: Started sshd@18-10.128.0.26:22-147.75.109.163:42524.service - OpenSSH per-connection server daemon (147.75.109.163:42524). Sep 4 00:07:57.541378 sshd[5703]: Accepted publickey for core from 147.75.109.163 port 42524 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:57.545599 sshd-session[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:57.559248 systemd-logind[1522]: New session 19 of user core. Sep 4 00:07:57.567944 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 00:07:58.032491 sshd[5705]: Connection closed by 147.75.109.163 port 42524 Sep 4 00:07:58.033659 sshd-session[5703]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:58.048916 systemd[1]: sshd@18-10.128.0.26:22-147.75.109.163:42524.service: Deactivated successfully. Sep 4 00:07:58.055851 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 00:07:58.058800 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit. Sep 4 00:07:58.064929 systemd-logind[1522]: Removed session 19. Sep 4 00:07:58.097590 systemd[1]: Started sshd@19-10.128.0.26:22-147.75.109.163:42538.service - OpenSSH per-connection server daemon (147.75.109.163:42538). Sep 4 00:07:58.440871 sshd[5717]: Accepted publickey for core from 147.75.109.163 port 42538 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:07:58.446051 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:07:58.467376 systemd-logind[1522]: New session 20 of user core. Sep 4 00:07:58.473947 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 00:07:59.891270 sshd[5719]: Connection closed by 147.75.109.163 port 42538 Sep 4 00:07:59.894855 sshd-session[5717]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:59.914840 systemd[1]: sshd@19-10.128.0.26:22-147.75.109.163:42538.service: Deactivated successfully. 
Sep 4 00:07:59.916703 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit. Sep 4 00:07:59.926377 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 00:07:59.956946 systemd-logind[1522]: Removed session 20. Sep 4 00:07:59.959115 systemd[1]: Started sshd@20-10.128.0.26:22-147.75.109.163:42548.service - OpenSSH per-connection server daemon (147.75.109.163:42548). Sep 4 00:08:00.334474 sshd[5733]: Accepted publickey for core from 147.75.109.163 port 42548 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:00.335649 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:00.351867 systemd-logind[1522]: New session 21 of user core. Sep 4 00:08:00.359806 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 00:08:01.024507 sshd[5738]: Connection closed by 147.75.109.163 port 42548 Sep 4 00:08:01.026280 sshd-session[5733]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:01.039417 systemd[1]: sshd@20-10.128.0.26:22-147.75.109.163:42548.service: Deactivated successfully. Sep 4 00:08:01.041246 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit. Sep 4 00:08:01.049731 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 00:08:01.060353 systemd-logind[1522]: Removed session 21. Sep 4 00:08:01.090736 systemd[1]: Started sshd@21-10.128.0.26:22-147.75.109.163:49580.service - OpenSSH per-connection server daemon (147.75.109.163:49580). Sep 4 00:08:01.453907 sshd[5750]: Accepted publickey for core from 147.75.109.163 port 49580 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:01.456918 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:01.467861 systemd-logind[1522]: New session 22 of user core. Sep 4 00:08:01.477791 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 4 00:08:01.848785 sshd[5752]: Connection closed by 147.75.109.163 port 49580 Sep 4 00:08:01.851805 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:01.864036 systemd[1]: sshd@21-10.128.0.26:22-147.75.109.163:49580.service: Deactivated successfully. Sep 4 00:08:01.865818 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit. Sep 4 00:08:01.875540 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 00:08:01.886369 systemd-logind[1522]: Removed session 22. Sep 4 00:08:06.912039 systemd[1]: Started sshd@22-10.128.0.26:22-147.75.109.163:49590.service - OpenSSH per-connection server daemon (147.75.109.163:49590). Sep 4 00:08:07.263410 sshd[5764]: Accepted publickey for core from 147.75.109.163 port 49590 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:07.267472 sshd-session[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:07.285085 systemd-logind[1522]: New session 23 of user core. Sep 4 00:08:07.292888 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 00:08:07.645006 sshd[5766]: Connection closed by 147.75.109.163 port 49590 Sep 4 00:08:07.646785 sshd-session[5764]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:07.658684 systemd[1]: sshd@22-10.128.0.26:22-147.75.109.163:49590.service: Deactivated successfully. Sep 4 00:08:07.659009 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit. Sep 4 00:08:07.668593 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 00:08:07.679118 systemd-logind[1522]: Removed session 23. 
Sep 4 00:08:11.025475 containerd[1554]: time="2025-09-04T00:08:11.024600578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d87c22de67da92e901fee7f92141a0844b634bc1acae0347818e956f0a07ff3\" id:\"7eeb02f85366696ceffc91234dd92070f1809f826eed4b4d11eb2265a50f399d\" pid:5791 exited_at:{seconds:1756944491 nanos:23726515}" Sep 4 00:08:11.139699 containerd[1554]: time="2025-09-04T00:08:11.139612652Z" level=info msg="StopPodSandbox for \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\"" Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.285 [WARNING][5808] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.285 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.285 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" iface="eth0" netns="" Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.286 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.286 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.364 [INFO][5817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.365 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.366 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.392 [WARNING][5817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.395 [INFO][5817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.403 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:08:11.420153 containerd[1554]: 2025-09-04 00:08:11.415 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:08:11.421291 containerd[1554]: time="2025-09-04T00:08:11.421188839Z" level=info msg="TearDown network for sandbox \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" successfully" Sep 4 00:08:11.421291 containerd[1554]: time="2025-09-04T00:08:11.421260247Z" level=info msg="StopPodSandbox for \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" returns successfully" Sep 4 00:08:11.423619 containerd[1554]: time="2025-09-04T00:08:11.423543773Z" level=info msg="RemovePodSandbox for \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\"" Sep 4 00:08:11.423930 containerd[1554]: time="2025-09-04T00:08:11.423768475Z" level=info msg="Forcibly stopping sandbox \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\"" Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.504 [WARNING][5833] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the 
datastore, moving forward with the clean up ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" WorkloadEndpoint="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.505 [INFO][5833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.505 [INFO][5833] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" iface="eth0" netns="" Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.505 [INFO][5833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.505 [INFO][5833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.577 [INFO][5840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.578 [INFO][5840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.578 [INFO][5840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.594 [WARNING][5840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.594 [INFO][5840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" HandleID="k8s-pod-network.5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Workload="ci--4372--1--0--nightly--20250903--2100--efcf35bfae9208b4dddc-k8s-calico--apiserver--7b7454849d--pjblp-eth0" Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.598 [INFO][5840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:08:11.604480 containerd[1554]: 2025-09-04 00:08:11.601 [INFO][5833] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e" Sep 4 00:08:11.606539 containerd[1554]: time="2025-09-04T00:08:11.605561024Z" level=info msg="TearDown network for sandbox \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" successfully" Sep 4 00:08:11.609688 containerd[1554]: time="2025-09-04T00:08:11.609612538Z" level=info msg="Ensure that sandbox 5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e in task-service has been cleanup successfully" Sep 4 00:08:11.615804 containerd[1554]: time="2025-09-04T00:08:11.615720746Z" level=info msg="RemovePodSandbox \"5f5e2b15011c3a59ea076cf8d89463024769f96d605a88c2d7c7d47ac767a67e\" returns successfully" Sep 4 00:08:12.713743 systemd[1]: Started sshd@23-10.128.0.26:22-147.75.109.163:57258.service - OpenSSH per-connection server daemon (147.75.109.163:57258). 
Sep 4 00:08:13.075814 sshd[5847]: Accepted publickey for core from 147.75.109.163 port 57258 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:13.077108 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:13.108070 systemd-logind[1522]: New session 24 of user core. Sep 4 00:08:13.112955 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 00:08:13.456611 sshd[5849]: Connection closed by 147.75.109.163 port 57258 Sep 4 00:08:13.457721 sshd-session[5847]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:13.471396 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit. Sep 4 00:08:13.475828 systemd[1]: sshd@23-10.128.0.26:22-147.75.109.163:57258.service: Deactivated successfully. Sep 4 00:08:13.482688 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 00:08:13.494565 systemd-logind[1522]: Removed session 24. Sep 4 00:08:15.729791 update_engine[1523]: I20250904 00:08:15.729691 1523 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 4 00:08:15.729791 update_engine[1523]: I20250904 00:08:15.729776 1523 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 4 00:08:15.730753 update_engine[1523]: I20250904 00:08:15.730111 1523 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 4 00:08:15.733063 update_engine[1523]: I20250904 00:08:15.732841 1523 omaha_request_params.cc:62] Current group set to beta Sep 4 00:08:15.733868 update_engine[1523]: I20250904 00:08:15.733549 1523 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 4 00:08:15.733868 update_engine[1523]: I20250904 00:08:15.733589 1523 update_attempter.cc:643] Scheduling an action processor start. 
Sep 4 00:08:15.733868 update_engine[1523]: I20250904 00:08:15.733624 1523 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 00:08:15.733868 update_engine[1523]: I20250904 00:08:15.733695 1523 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 4 00:08:15.736534 update_engine[1523]: I20250904 00:08:15.734418 1523 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 00:08:15.736534 update_engine[1523]: I20250904 00:08:15.734496 1523 omaha_request_action.cc:272] Request: Sep 4 00:08:15.736534 update_engine[1523]: Sep 4 00:08:15.736534 update_engine[1523]: Sep 4 00:08:15.736534 update_engine[1523]: Sep 4 00:08:15.736534 update_engine[1523]: Sep 4 00:08:15.736534 update_engine[1523]: Sep 4 00:08:15.736534 update_engine[1523]: Sep 4 00:08:15.736534 update_engine[1523]: Sep 4 00:08:15.736534 update_engine[1523]: Sep 4 00:08:15.736534 update_engine[1523]: I20250904 00:08:15.734513 1523 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 00:08:15.740087 update_engine[1523]: I20250904 00:08:15.740036 1523 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 00:08:15.740779 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 4 00:08:15.741791 update_engine[1523]: I20250904 00:08:15.741611 1523 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 00:08:15.764800 update_engine[1523]: E20250904 00:08:15.764703 1523 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 00:08:15.765073 update_engine[1523]: I20250904 00:08:15.764900 1523 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 4 00:08:16.184417 containerd[1554]: time="2025-09-04T00:08:16.182905917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dee49be162d302756d0c1cc7ecc8749cf401108fcabf67d59e78201477c38728\" id:\"9ee8ee1497c84230e8ff66ff76b3658bdaa41c29473a2bfc38d8cc05bbccd126\" pid:5872 exited_at:{seconds:1756944496 nanos:182267159}" Sep 4 00:08:18.519895 systemd[1]: Started sshd@24-10.128.0.26:22-147.75.109.163:57266.service - OpenSSH per-connection server daemon (147.75.109.163:57266). Sep 4 00:08:18.876736 sshd[5892]: Accepted publickey for core from 147.75.109.163 port 57266 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:18.882543 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:18.898900 systemd-logind[1522]: New session 25 of user core. Sep 4 00:08:18.907777 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 00:08:19.247017 sshd[5894]: Connection closed by 147.75.109.163 port 57266 Sep 4 00:08:19.249030 sshd-session[5892]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:19.264035 systemd[1]: sshd@24-10.128.0.26:22-147.75.109.163:57266.service: Deactivated successfully. Sep 4 00:08:19.268682 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit. Sep 4 00:08:19.273757 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 00:08:19.281066 systemd-logind[1522]: Removed session 25. Sep 4 00:08:19.711227 systemd[1]: Started sshd@25-10.128.0.26:22-66.175.213.4:27466.service - OpenSSH per-connection server daemon (66.175.213.4:27466). 
Sep 4 00:08:20.479482 sshd[5906]: Connection closed by 66.175.213.4 port 27466 [preauth] Sep 4 00:08:20.481667 systemd[1]: sshd@25-10.128.0.26:22-66.175.213.4:27466.service: Deactivated successfully. Sep 4 00:08:20.547783 systemd[1]: Started sshd@26-10.128.0.26:22-66.175.213.4:27482.service - OpenSSH per-connection server daemon (66.175.213.4:27482). Sep 4 00:08:21.269068 sshd[5911]: Connection closed by 66.175.213.4 port 27482 [preauth] Sep 4 00:08:21.273754 systemd[1]: sshd@26-10.128.0.26:22-66.175.213.4:27482.service: Deactivated successfully. Sep 4 00:08:21.332374 systemd[1]: Started sshd@27-10.128.0.26:22-66.175.213.4:27494.service - OpenSSH per-connection server daemon (66.175.213.4:27494). Sep 4 00:08:22.106203 sshd[5918]: Connection closed by 66.175.213.4 port 27494 [preauth] Sep 4 00:08:22.111323 systemd[1]: sshd@27-10.128.0.26:22-66.175.213.4:27494.service: Deactivated successfully.