Sep 12 17:40:59.079665 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:40:59.079707 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:59.079726 kernel: BIOS-provided physical RAM map:
Sep 12 17:40:59.079740 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 12 17:40:59.079754 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 12 17:40:59.079768 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 12 17:40:59.079784 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 12 17:40:59.079802 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 12 17:40:59.079817 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Sep 12 17:40:59.079832 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Sep 12 17:40:59.079847 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Sep 12 17:40:59.079863 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Sep 12 17:40:59.079878 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 12 17:40:59.079910 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 12 17:40:59.079937 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 12 17:40:59.079953 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 12 17:40:59.079994 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 12 17:40:59.080010 kernel: NX (Execute Disable) protection: active
Sep 12 17:40:59.080027 kernel: APIC: Static calls initialized
Sep 12 17:40:59.080051 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:40:59.080067 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Sep 12 17:40:59.080084 kernel: SMBIOS 2.4 present.
Sep 12 17:40:59.080101 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 12 17:40:59.080117 kernel: Hypervisor detected: KVM
Sep 12 17:40:59.080138 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:40:59.080154 kernel: kvm-clock: using sched offset of 12407933091 cycles
Sep 12 17:40:59.080170 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:40:59.080187 kernel: tsc: Detected 2299.998 MHz processor
Sep 12 17:40:59.080204 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:40:59.080220 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:40:59.080237 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 12 17:40:59.080255 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 12 17:40:59.080272 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:40:59.080293 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 12 17:40:59.080310 kernel: Using GB pages for direct mapping
Sep 12 17:40:59.080327 kernel: Secure boot disabled
Sep 12 17:40:59.080344 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:40:59.080361 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 12 17:40:59.080378 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 12 17:40:59.080395 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 12 17:40:59.080419 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 12 17:40:59.080440 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 12 17:40:59.080458 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 12 17:40:59.080477 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 12 17:40:59.080495 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 12 17:40:59.080514 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 12 17:40:59.080532 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 12 17:40:59.080554 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 12 17:40:59.080572 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 12 17:40:59.080590 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 12 17:40:59.080608 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 12 17:40:59.080626 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 12 17:40:59.080644 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 12 17:40:59.080662 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 12 17:40:59.080681 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 12 17:40:59.080698 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 12 17:40:59.080720 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 12 17:40:59.080738 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:40:59.080757 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:40:59.080775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 17:40:59.080793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 12 17:40:59.080811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 12 17:40:59.080831 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Sep 12 17:40:59.080849 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Sep 12 17:40:59.080866 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Sep 12 17:40:59.080889 kernel: Zone ranges:
Sep 12 17:40:59.080907 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:40:59.080925 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 12 17:40:59.080944 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 12 17:40:59.080974 kernel: Movable zone start for each node
Sep 12 17:40:59.081004 kernel: Early memory node ranges
Sep 12 17:40:59.081022 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 12 17:40:59.081047 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 12 17:40:59.081066 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Sep 12 17:40:59.081089 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 12 17:40:59.081107 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 12 17:40:59.081125 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 12 17:40:59.081142 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:40:59.081160 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 12 17:40:59.081178 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 12 17:40:59.081198 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 17:40:59.081216 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 12 17:40:59.081234 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 17:40:59.081253 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:40:59.081275 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:40:59.081292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:40:59.081308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:40:59.081326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:40:59.081343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:40:59.081361 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:40:59.081379 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:40:59.081396 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 12 17:40:59.081413 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:40:59.081435 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:40:59.081453 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:40:59.081471 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:40:59.081489 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:40:59.081506 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:40:59.081520 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:40:59.081539 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:40:59.081559 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:59.081582 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:40:59.081600 kernel: random: crng init done
Sep 12 17:40:59.081618 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 12 17:40:59.081637 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:40:59.081655 kernel: Fallback order for Node 0: 0
Sep 12 17:40:59.081672 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Sep 12 17:40:59.081689 kernel: Policy zone: Normal
Sep 12 17:40:59.081708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:40:59.081727 kernel: software IO TLB: area num 2.
Sep 12 17:40:59.081749 kernel: Memory: 7513400K/7860584K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 346924K reserved, 0K cma-reserved)
Sep 12 17:40:59.081768 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:40:59.081786 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:40:59.081805 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:40:59.081822 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:40:59.081840 kernel: Dynamic Preempt: voluntary
Sep 12 17:40:59.081859 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:40:59.081879 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:40:59.081916 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:40:59.081935 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:40:59.081955 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:40:59.081993 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:40:59.082009 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:40:59.082025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:40:59.082048 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:40:59.082065 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:40:59.082081 kernel: Console: colour dummy device 80x25
Sep 12 17:40:59.082102 kernel: printk: console [ttyS0] enabled
Sep 12 17:40:59.082120 kernel: ACPI: Core revision 20230628
Sep 12 17:40:59.082138 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:40:59.082156 kernel: x2apic enabled
Sep 12 17:40:59.082175 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:40:59.082194 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 12 17:40:59.082211 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 12 17:40:59.082230 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 12 17:40:59.082260 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 12 17:40:59.082279 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 12 17:40:59.082296 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:40:59.082334 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 12 17:40:59.082353 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 12 17:40:59.082372 kernel: Spectre V2 : Mitigation: IBRS
Sep 12 17:40:59.082391 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:40:59.082410 kernel: RETBleed: Mitigation: IBRS
Sep 12 17:40:59.082429 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:40:59.082452 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 12 17:40:59.082471 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:40:59.082489 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 17:40:59.082508 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:40:59.082526 kernel: active return thunk: its_return_thunk
Sep 12 17:40:59.082545 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:40:59.082564 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:40:59.082583 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:40:59.082602 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:40:59.082624 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:40:59.082643 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 17:40:59.082662 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:40:59.082680 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:40:59.082699 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:40:59.082717 kernel: landlock: Up and running.
Sep 12 17:40:59.082736 kernel: SELinux: Initializing.
Sep 12 17:40:59.082755 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:40:59.082773 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:40:59.082795 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 12 17:40:59.082814 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:40:59.082832 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:40:59.082850 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:40:59.082868 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 12 17:40:59.082887 kernel: signal: max sigframe size: 1776
Sep 12 17:40:59.082905 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:40:59.082925 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:40:59.082943 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:40:59.082981 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:40:59.083000 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:40:59.083018 kernel: .... node #0, CPUs: #1
Sep 12 17:40:59.083046 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 17:40:59.083065 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:40:59.083084 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:40:59.083103 kernel: smpboot: Max logical packages: 1
Sep 12 17:40:59.083122 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 12 17:40:59.083144 kernel: devtmpfs: initialized
Sep 12 17:40:59.083162 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:40:59.083181 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 12 17:40:59.083199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:40:59.083218 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:40:59.083236 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:40:59.083254 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:40:59.083273 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:40:59.083291 kernel: audit: type=2000 audit(1757698858.133:1): state=initialized audit_enabled=0 res=1
Sep 12 17:40:59.083313 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:40:59.083332 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:40:59.083350 kernel: cpuidle: using governor menu
Sep 12 17:40:59.083368 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:40:59.083387 kernel: dca service started, version 1.12.1
Sep 12 17:40:59.083405 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:40:59.083423 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:40:59.083442 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:40:59.083460 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:40:59.083482 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:40:59.083509 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:40:59.083528 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:40:59.083547 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:40:59.083564 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:40:59.083583 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 17:40:59.083601 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:40:59.083621 kernel: ACPI: Interpreter enabled
Sep 12 17:40:59.083638 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 17:40:59.083662 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:40:59.083682 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:40:59.083701 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 17:40:59.083720 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 12 17:40:59.083739 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:40:59.084053 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:40:59.084276 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:40:59.084470 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:40:59.084500 kernel: PCI host bridge to bus 0000:00
Sep 12 17:40:59.084692 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:40:59.084867 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:40:59.085079 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:40:59.085254 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 12 17:40:59.085425 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:40:59.085632 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:40:59.085847 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Sep 12 17:40:59.086076 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 12 17:40:59.086287 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 17:40:59.086497 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Sep 12 17:40:59.086697 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Sep 12 17:40:59.086894 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Sep 12 17:40:59.087184 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:40:59.087391 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Sep 12 17:40:59.087586 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Sep 12 17:40:59.087789 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:40:59.088026 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Sep 12 17:40:59.088233 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Sep 12 17:40:59.088264 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:40:59.088285 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:40:59.088303 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:40:59.088322 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:40:59.088343 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:40:59.088363 kernel: iommu: Default domain type: Translated
Sep 12 17:40:59.088383 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:40:59.088403 kernel: efivars: Registered efivars operations
Sep 12 17:40:59.088421 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:40:59.088441 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:40:59.088464 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 12 17:40:59.088483 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 12 17:40:59.088503 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 12 17:40:59.088522 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 12 17:40:59.088540 kernel: vgaarb: loaded
Sep 12 17:40:59.088559 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:40:59.088578 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:40:59.088598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:40:59.088622 kernel: pnp: PnP ACPI init
Sep 12 17:40:59.088641 kernel: pnp: PnP ACPI: found 7 devices
Sep 12 17:40:59.088661 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:40:59.088681 kernel: NET: Registered PF_INET protocol family
Sep 12 17:40:59.088701 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:40:59.088721 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 12 17:40:59.088741 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:40:59.088761 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:40:59.088780 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 12 17:40:59.088803 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 12 17:40:59.088823 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:40:59.088843 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:40:59.088862 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:40:59.088882 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:40:59.089106 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:40:59.089307 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:40:59.089481 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:40:59.089655 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 12 17:40:59.089846 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:40:59.089870 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:40:59.089890 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 17:40:59.089909 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 12 17:40:59.089928 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:40:59.089948 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 12 17:40:59.089994 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:40:59.090021 kernel: Initialise system trusted keyrings
Sep 12 17:40:59.090049 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 12 17:40:59.090068 kernel: Key type asymmetric registered
Sep 12 17:40:59.090087 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:40:59.090105 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:40:59.090125 kernel: io scheduler mq-deadline registered
Sep 12 17:40:59.090144 kernel: io scheduler kyber registered
Sep 12 17:40:59.090163 kernel: io scheduler bfq registered
Sep 12 17:40:59.090182 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:40:59.090206 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 12 17:40:59.090401 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 12 17:40:59.090426 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 12 17:40:59.090610 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 12 17:40:59.090634 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 12 17:40:59.090815 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 12 17:40:59.090838 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:40:59.090858 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:40:59.090877 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 12 17:40:59.090901 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 12 17:40:59.090920 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 12 17:40:59.091153 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 12 17:40:59.091180 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:40:59.091199 kernel: i8042: Warning: Keylock active
Sep 12 17:40:59.091218 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:40:59.091238 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:40:59.091427 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 17:40:59.091607 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 17:40:59.091777 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:40:58 UTC (1757698858)
Sep 12 17:40:59.091947 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 17:40:59.091993 kernel: intel_pstate: CPU model not supported
Sep 12 17:40:59.092012 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:40:59.092031 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:40:59.092058 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:40:59.092077 kernel: Segment Routing with IPv6
Sep 12 17:40:59.092101 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:40:59.092121 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:40:59.092139 kernel: Key type dns_resolver registered
Sep 12 17:40:59.092158 kernel: IPI shorthand broadcast: enabled
Sep 12 17:40:59.092177 kernel: sched_clock: Marking stable (821004317, 130407882)->(976058403, -24646204)
Sep 12 17:40:59.092197 kernel: registered taskstats version 1
Sep 12 17:40:59.092216 kernel: Loading compiled-in X.509 certificates
Sep 12 17:40:59.092235 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:40:59.092253 kernel: Key type .fscrypt registered
Sep 12 17:40:59.092276 kernel: Key type fscrypt-provisioning registered
Sep 12 17:40:59.092295 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:40:59.092314 kernel: ima: No architecture policies found
Sep 12 17:40:59.092333 kernel: clk: Disabling unused clocks
Sep 12 17:40:59.092352 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:40:59.092371 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:40:59.092390 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:40:59.092409 kernel: Run /init as init process
Sep 12 17:40:59.092428 kernel: with arguments:
Sep 12 17:40:59.092451 kernel: /init
Sep 12 17:40:59.092469 kernel: with environment:
Sep 12 17:40:59.092488 kernel: HOME=/
Sep 12 17:40:59.092506 kernel: TERM=linux
Sep 12 17:40:59.092526 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:40:59.092545 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:40:59.092567 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:40:59.092593 systemd[1]: Detected virtualization google.
Sep 12 17:40:59.092614 systemd[1]: Detected architecture x86-64.
Sep 12 17:40:59.092633 systemd[1]: Running in initrd.
Sep 12 17:40:59.092652 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:40:59.092672 systemd[1]: Hostname set to .
Sep 12 17:40:59.092693 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:40:59.092713 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:40:59.092732 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:40:59.092756 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:40:59.092777 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:40:59.092797 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:40:59.092817 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:40:59.092837 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:40:59.092860 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:40:59.092880 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:40:59.092904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:40:59.092925 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:40:59.092996 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:40:59.093022 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:40:59.093050 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:40:59.093071 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:40:59.093096 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:40:59.093116 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:40:59.093138 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:40:59.093159 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:40:59.093179 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:40:59.093200 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:40:59.093221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:40:59.093242 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:40:59.093263 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:40:59.093288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:40:59.093309 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:40:59.093330 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:40:59.093351 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:40:59.093372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:40:59.093392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:59.093413 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:40:59.093465 systemd-journald[183]: Collecting audit messages is disabled.
Sep 12 17:40:59.093511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:40:59.093532 systemd-journald[183]: Journal started
Sep 12 17:40:59.093576 systemd-journald[183]: Runtime Journal (/run/log/journal/a0c23842c43640d9b26c1314c7a00cf5) is 8.0M, max 148.7M, 140.7M free.
Sep 12 17:40:59.096372 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:40:59.099987 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:40:59.102243 systemd-modules-load[184]: Inserted module 'overlay'
Sep 12 17:40:59.110182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:40:59.123102 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:40:59.128350 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:59.134484 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:40:59.144178 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:59.161985 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:40:59.164680 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 12 17:40:59.164917 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:40:59.165158 kernel: Bridge firewalling registered
Sep 12 17:40:59.166368 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:40:59.167000 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:40:59.172186 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:40:59.185226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:40:59.195619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:40:59.203163 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:40:59.211436 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:59.222135 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:40:59.244146 systemd-resolved[215]: Positive Trust Anchors:
Sep 12 17:40:59.244603 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:40:59.244815 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:40:59.263096 dracut-cmdline[218]: dracut-dracut-053
Sep 12 17:40:59.263096 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:59.251118 systemd-resolved[215]: Defaulting to hostname 'linux'.
Sep 12 17:40:59.254205 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:40:59.267156 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:40:59.355008 kernel: SCSI subsystem initialized
Sep 12 17:40:59.367004 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:40:59.378984 kernel: iscsi: registered transport (tcp)
Sep 12 17:40:59.403210 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:40:59.403273 kernel: QLogic iSCSI HBA Driver
Sep 12 17:40:59.454902 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:40:59.459218 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:40:59.500003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:40:59.500059 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:40:59.501932 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:40:59.546000 kernel: raid6: avx2x4 gen() 18391 MB/s
Sep 12 17:40:59.562992 kernel: raid6: avx2x2 gen() 18307 MB/s
Sep 12 17:40:59.580377 kernel: raid6: avx2x1 gen() 14315 MB/s
Sep 12 17:40:59.580421 kernel: raid6: using algorithm avx2x4 gen() 18391 MB/s
Sep 12 17:40:59.598370 kernel: raid6: .... xor() 7970 MB/s, rmw enabled
Sep 12 17:40:59.598409 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:40:59.620998 kernel: xor: automatically using best checksumming function avx
Sep 12 17:40:59.793011 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:40:59.806348 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:40:59.813233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:40:59.841243 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Sep 12 17:40:59.848345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:40:59.861875 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:40:59.879948 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Sep 12 17:40:59.915634 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:40:59.933150 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:41:00.015416 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:41:00.024201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:41:00.067057 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:41:00.071979 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:41:00.081113 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:41:00.085067 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:41:00.093179 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:41:00.132801 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:41:00.146239 kernel: scsi host0: Virtio SCSI HBA
Sep 12 17:41:00.156563 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Sep 12 17:41:00.169986 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:41:00.203990 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:41:00.204049 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:41:00.240723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:41:00.240940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:41:00.245556 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:41:00.249039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:41:00.271224 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Sep 12 17:41:00.271537 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Sep 12 17:41:00.271794 kernel: sd 0:0:1:0: [sda] Write Protect is off
Sep 12 17:41:00.273239 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Sep 12 17:41:00.273490 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 12 17:41:00.249250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:00.253547 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:41:00.282364 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:41:00.282399 kernel: GPT:17805311 != 25165823
Sep 12 17:41:00.282423 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:41:00.265302 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:41:00.291074 kernel: GPT:17805311 != 25165823
Sep 12 17:41:00.291108 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:41:00.291135 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:41:00.291161 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Sep 12 17:41:00.314977 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:00.324266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:41:00.349981 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (450)
Sep 12 17:41:00.357006 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (465)
Sep 12 17:41:00.382865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:41:00.397138 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Sep 12 17:41:00.404779 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Sep 12 17:41:00.414741 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Sep 12 17:41:00.414940 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Sep 12 17:41:00.436929 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 12 17:41:00.445172 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:41:00.458649 disk-uuid[549]: Primary Header is updated.
Sep 12 17:41:00.458649 disk-uuid[549]: Secondary Entries is updated.
Sep 12 17:41:00.458649 disk-uuid[549]: Secondary Header is updated.
Sep 12 17:41:00.473020 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:41:00.495000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:41:00.502000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:41:01.503005 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:41:01.504148 disk-uuid[550]: The operation has completed successfully.
Sep 12 17:41:01.579724 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:41:01.579890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:41:01.619195 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:41:01.650080 sh[567]: Success
Sep 12 17:41:01.673007 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 17:41:01.752788 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:41:01.759745 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:41:01.780357 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:41:01.815999 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:41:01.816071 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:41:01.832703 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:41:01.832744 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:41:01.839521 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:41:01.869992 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:41:01.875855 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:41:01.876759 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:41:01.882155 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:41:01.894155 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:41:01.957993 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:41:01.958059 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:41:01.965065 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:41:01.981668 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:41:01.981726 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:41:01.999317 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:41:02.017133 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:41:02.025736 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:41:02.044232 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:41:02.128242 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:41:02.133194 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:41:02.251277 systemd-networkd[750]: lo: Link UP
Sep 12 17:41:02.251290 systemd-networkd[750]: lo: Gained carrier
Sep 12 17:41:02.253934 systemd-networkd[750]: Enumeration completed
Sep 12 17:41:02.259546 ignition[678]: Ignition 2.19.0
Sep 12 17:41:02.254651 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:41:02.259561 ignition[678]: Stage: fetch-offline
Sep 12 17:41:02.254658 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:41:02.259653 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:02.256246 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:41:02.259671 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:41:02.257251 systemd-networkd[750]: eth0: Link UP
Sep 12 17:41:02.259848 ignition[678]: parsed url from cmdline: ""
Sep 12 17:41:02.257258 systemd-networkd[750]: eth0: Gained carrier
Sep 12 17:41:02.259855 ignition[678]: no config URL provided
Sep 12 17:41:02.257270 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:41:02.259866 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:41:02.259715 systemd[1]: Reached target network.target - Network.
Sep 12 17:41:02.259877 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:41:02.266041 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.94/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 12 17:41:02.259886 ignition[678]: failed to fetch config: resource requires networking
Sep 12 17:41:02.275509 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:41:02.260179 ignition[678]: Ignition finished successfully
Sep 12 17:41:02.307171 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:41:02.347940 ignition[760]: Ignition 2.19.0
Sep 12 17:41:02.357610 unknown[760]: fetched base config from "system"
Sep 12 17:41:02.347954 ignition[760]: Stage: fetch
Sep 12 17:41:02.357623 unknown[760]: fetched base config from "system"
Sep 12 17:41:02.348241 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:02.357634 unknown[760]: fetched user config from "gcp"
Sep 12 17:41:02.348257 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:41:02.370497 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:41:02.348395 ignition[760]: parsed url from cmdline: ""
Sep 12 17:41:02.396248 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:41:02.348404 ignition[760]: no config URL provided
Sep 12 17:41:02.442382 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:41:02.348412 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:41:02.464203 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:41:02.348425 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:41:02.503330 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:41:02.348454 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Sep 12 17:41:02.508426 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:41:02.351175 ignition[760]: GET result: OK
Sep 12 17:41:02.537194 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:41:02.351321 ignition[760]: parsing config with SHA512: 5a05a0219e8690b23858c43c81b231cd5e4d4a37a64d5c97d5ca21f0af4671cbe78735eeee3d92723c1ee2030b01a789c677c76473cfe4399483f5f969373474
Sep 12 17:41:02.544247 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:41:02.358418 ignition[760]: fetch: fetch complete
Sep 12 17:41:02.572084 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:41:02.358425 ignition[760]: fetch: fetch passed
Sep 12 17:41:02.586115 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:41:02.358483 ignition[760]: Ignition finished successfully
Sep 12 17:41:02.607161 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:41:02.440004 ignition[767]: Ignition 2.19.0
Sep 12 17:41:02.440016 ignition[767]: Stage: kargs
Sep 12 17:41:02.440261 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:02.440274 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:41:02.441223 ignition[767]: kargs: kargs passed
Sep 12 17:41:02.441280 ignition[767]: Ignition finished successfully
Sep 12 17:41:02.500854 ignition[772]: Ignition 2.19.0
Sep 12 17:41:02.500864 ignition[772]: Stage: disks
Sep 12 17:41:02.501247 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:02.501262 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:41:02.502222 ignition[772]: disks: disks passed
Sep 12 17:41:02.502282 ignition[772]: Ignition finished successfully
Sep 12 17:41:02.673430 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 12 17:41:02.852022 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:41:02.857131 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:41:03.006044 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:41:03.007122 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:41:03.016730 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:41:03.041079 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:41:03.061384 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:41:03.070619 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:41:03.070701 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:41:03.169151 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790)
Sep 12 17:41:03.169200 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:41:03.169227 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:41:03.169251 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:41:03.169283 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:41:03.169299 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:41:03.070732 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:41:03.140321 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:41:03.150151 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:41:03.186189 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:41:03.290768 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:41:03.301118 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:41:03.311200 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:41:03.322076 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:41:03.443622 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:41:03.449101 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:41:03.484991 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:41:03.494173 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:41:03.503458 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:41:03.535137 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:41:03.543105 ignition[902]: INFO : Ignition 2.19.0
Sep 12 17:41:03.543105 ignition[902]: INFO : Stage: mount
Sep 12 17:41:03.543105 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:03.543105 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:41:03.543105 ignition[902]: INFO : mount: mount passed
Sep 12 17:41:03.543105 ignition[902]: INFO : Ignition finished successfully
Sep 12 17:41:03.554447 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:41:03.567130 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:41:04.013201 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:41:04.027105 systemd-networkd[750]: eth0: Gained IPv6LL
Sep 12 17:41:04.061088 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914)
Sep 12 17:41:04.078326 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:41:04.078385 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:41:04.078412 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:41:04.099664 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:41:04.099727 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:41:04.102760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:41:04.141682 ignition[931]: INFO : Ignition 2.19.0
Sep 12 17:41:04.141682 ignition[931]: INFO : Stage: files
Sep 12 17:41:04.156080 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:04.156080 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:41:04.156080 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:41:04.156080 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:41:04.156080 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 12 17:41:04.149362 unknown[931]: wrote ssh authorized keys file for user: core
Sep 12 17:41:04.409790 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:41:05.065209 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 12 17:41:05.481397 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:41:06.182201 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:41:06.182201 ignition[931]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: files passed
Sep 12 17:41:06.219134 ignition[931]: INFO : Ignition finished successfully
Sep 12 17:41:06.186871 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:41:06.207261 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:41:06.251219 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:41:06.264519 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:41:06.432089 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:41:06.432089 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:41:06.264642 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:41:06.470236 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:41:06.336665 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:41:06.348342 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:41:06.369151 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:41:06.467483 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:41:06.467602 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:41:06.481258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:41:06.505074 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:41:06.525154 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:41:06.530151 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:41:06.581891 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:41:06.606176 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:41:06.644268 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:41:06.659349 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:41:06.670449 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:41:06.690353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:41:06.690524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:41:06.726359 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:41:06.751415 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:41:06.762382 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:41:06.789213 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:41:06.808265 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:41:06.829273 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:41:06.849220 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:41:06.870263 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:41:06.890331 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:41:06.911223 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:41:06.929231 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:41:06.929433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:41:06.956342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:41:06.974224 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:41:06.995257 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:41:06.995441 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:41:07.016218 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:41:07.016473 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:41:07.045341 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:41:07.045562 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:41:07.069312 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:41:07.069478 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:41:07.145186 ignition[983]: INFO : Ignition 2.19.0 Sep 12 17:41:07.145186 ignition[983]: INFO : Stage: umount Sep 12 17:41:07.145186 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:41:07.145186 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:41:07.145186 ignition[983]: INFO : umount: umount passed Sep 12 17:41:07.145186 ignition[983]: INFO : Ignition finished successfully Sep 12 17:41:07.093201 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:41:07.109210 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:41:07.145364 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:41:07.145670 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:41:07.197314 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:41:07.197486 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:41:07.224708 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:41:07.225739 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:41:07.225849 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:41:07.241616 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:41:07.241720 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:41:07.261519 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:41:07.261656 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 12 17:41:07.272271 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:41:07.272323 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:41:07.290357 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:41:07.290423 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:41:07.307309 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:41:07.307362 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:41:07.324295 systemd[1]: Stopped target network.target - Network. Sep 12 17:41:07.339272 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:41:07.339343 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:41:07.354336 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:41:07.372223 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:41:07.376046 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:41:07.398258 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:41:07.407266 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:41:07.422263 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:41:07.422317 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:41:07.440286 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:41:07.440339 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:41:07.457277 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:41:07.457336 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:41:07.474336 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:41:07.474401 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Sep 12 17:41:07.491270 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:41:07.491323 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:41:07.508500 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:41:07.513050 systemd-networkd[750]: eth0: DHCPv6 lease lost Sep 12 17:41:07.536301 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:41:07.544671 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:41:07.544796 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:41:07.559585 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:41:07.559702 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:41:07.580401 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:41:07.580452 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:41:07.600097 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:41:07.620036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:41:07.620122 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:41:07.640149 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:41:07.640249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:41:07.650147 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:41:07.650261 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:41:07.673311 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:41:07.673489 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:41:08.113076 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). 
Sep 12 17:41:07.691311 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:41:07.714392 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:41:07.714613 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:41:07.726578 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:41:07.726684 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:41:07.745697 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:41:07.745772 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:41:07.771229 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:41:07.771280 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:41:07.779273 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:41:07.779332 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:41:07.830082 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:41:07.830298 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:41:07.858224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:41:07.858419 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:41:07.908156 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:41:07.920043 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:41:07.920127 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:41:07.931132 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:41:07.931222 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 12 17:41:07.942237 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:41:07.942293 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:41:07.953262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:41:07.953314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:07.989661 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:41:07.989768 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:41:08.000351 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:41:08.023169 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:41:08.069328 systemd[1]: Switching root.
Sep 12 17:41:08.401076 systemd-journald[183]: Journal stopped
Sep 12 17:40:59.079665 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:40:59.079707 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:59.079726 kernel: BIOS-provided physical RAM map:
Sep 12 17:40:59.079740 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 12 17:40:59.079754 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 12 17:40:59.079768 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 12 17:40:59.079784 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 12 17:40:59.079802 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 12 17:40:59.079817 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Sep 12 17:40:59.079832 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Sep 12 17:40:59.079847 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Sep 12 17:40:59.079863 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Sep 12 17:40:59.079878 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 12 17:40:59.079910 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 12 17:40:59.079937 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 12 17:40:59.079953 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 12 17:40:59.079994 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 12 17:40:59.080010 kernel: NX (Execute Disable) protection: active
Sep 12 17:40:59.080027 kernel: APIC: Static calls initialized
Sep 12 17:40:59.080051 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:40:59.080067 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Sep 12 17:40:59.080084 kernel: SMBIOS 2.4 present.
Sep 12 17:40:59.080101 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 12 17:40:59.080117 kernel: Hypervisor detected: KVM
Sep 12 17:40:59.080138 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:40:59.080154 kernel: kvm-clock: using sched offset of 12407933091 cycles
Sep 12 17:40:59.080170 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:40:59.080187 kernel: tsc: Detected 2299.998 MHz processor
Sep 12 17:40:59.080204 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:40:59.080220 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:40:59.080237 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 12 17:40:59.080255 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 12 17:40:59.080272 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:40:59.080293 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 12 17:40:59.080310 kernel: Using GB pages for direct mapping
Sep 12 17:40:59.080327 kernel: Secure boot disabled
Sep 12 17:40:59.080344 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:40:59.080361 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 12 17:40:59.080378 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 12 17:40:59.080395 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 12 17:40:59.080419 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 12 17:40:59.080440 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 12 17:40:59.080458 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 12 17:40:59.080477 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 12 17:40:59.080495 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 12 17:40:59.080514 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 12 17:40:59.080532 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 12 17:40:59.080554 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 12 17:40:59.080572 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 12 17:40:59.080590 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 12 17:40:59.080608 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 12 17:40:59.080626 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 12 17:40:59.080644 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 12 17:40:59.080662 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 12 17:40:59.080681 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 12 17:40:59.080698 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 12 17:40:59.080720 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 12 17:40:59.080738 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:40:59.080757 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:40:59.080775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 17:40:59.080793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 12 17:40:59.080811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 12 17:40:59.080831 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Sep 12 17:40:59.080849 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Sep 12 17:40:59.080866 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Sep 12 17:40:59.080889 kernel: Zone ranges:
Sep 12 17:40:59.080907 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:40:59.080925 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 12 17:40:59.080944 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 12 17:40:59.080974 kernel: Movable zone start for each node
Sep 12 17:40:59.081004 kernel: Early memory node ranges
Sep 12 17:40:59.081022 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 12 17:40:59.081047 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 12 17:40:59.081066 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Sep 12 17:40:59.081089 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 12 17:40:59.081107 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 12 17:40:59.081125 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 12 17:40:59.081142 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:40:59.081160 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 12 17:40:59.081178 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 12 17:40:59.081198 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 17:40:59.081216 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 12 17:40:59.081234 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 17:40:59.081253 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:40:59.081275 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:40:59.081292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:40:59.081308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:40:59.081326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:40:59.081343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:40:59.081361 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:40:59.081379 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:40:59.081396 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 12 17:40:59.081413 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:40:59.081435 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:40:59.081453 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:40:59.081471 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:40:59.081489 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:40:59.081506 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:40:59.081520 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:40:59.081539 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:40:59.081559 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:59.081582 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:40:59.081600 kernel: random: crng init done
Sep 12 17:40:59.081618 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 12 17:40:59.081637 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:40:59.081655 kernel: Fallback order for Node 0: 0
Sep 12 17:40:59.081672 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Sep 12 17:40:59.081689 kernel: Policy zone: Normal
Sep 12 17:40:59.081708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:40:59.081727 kernel: software IO TLB: area num 2.
Sep 12 17:40:59.081749 kernel: Memory: 7513400K/7860584K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 346924K reserved, 0K cma-reserved)
Sep 12 17:40:59.081768 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:40:59.081786 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:40:59.081805 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:40:59.081822 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:40:59.081840 kernel: Dynamic Preempt: voluntary
Sep 12 17:40:59.081859 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:40:59.081879 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:40:59.081916 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:40:59.081935 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:40:59.081955 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:40:59.081993 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:40:59.082009 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:40:59.082025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:40:59.082048 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:40:59.082065 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:40:59.082081 kernel: Console: colour dummy device 80x25
Sep 12 17:40:59.082102 kernel: printk: console [ttyS0] enabled
Sep 12 17:40:59.082120 kernel: ACPI: Core revision 20230628
Sep 12 17:40:59.082138 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:40:59.082156 kernel: x2apic enabled
Sep 12 17:40:59.082175 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:40:59.082194 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 12 17:40:59.082211 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 12 17:40:59.082230 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 12 17:40:59.082260 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 12 17:40:59.082279 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 12 17:40:59.082296 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:40:59.082334 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 12 17:40:59.082353 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 12 17:40:59.082372 kernel: Spectre V2 : Mitigation: IBRS
Sep 12 17:40:59.082391 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:40:59.082410 kernel: RETBleed: Mitigation: IBRS
Sep 12 17:40:59.082429 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:40:59.082452 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 12 17:40:59.082471 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:40:59.082489 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 17:40:59.082508 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:40:59.082526 kernel: active return thunk: its_return_thunk
Sep 12 17:40:59.082545 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:40:59.082564 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:40:59.082583 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:40:59.082602 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:40:59.082624 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:40:59.082643 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 17:40:59.082662 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:40:59.082680 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:40:59.082699 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:40:59.082717 kernel: landlock: Up and running.
Sep 12 17:40:59.082736 kernel: SELinux: Initializing.
Sep 12 17:40:59.082755 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:40:59.082773 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:40:59.082795 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 12 17:40:59.082814 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:40:59.082832 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:40:59.082850 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:40:59.082868 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 12 17:40:59.082887 kernel: signal: max sigframe size: 1776
Sep 12 17:40:59.082905 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:40:59.082925 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:40:59.082943 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:40:59.082981 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:40:59.083000 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:40:59.083018 kernel: .... node #0, CPUs: #1
Sep 12 17:40:59.083046 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 17:40:59.083065 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:40:59.083084 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:40:59.083103 kernel: smpboot: Max logical packages: 1
Sep 12 17:40:59.083122 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 12 17:40:59.083144 kernel: devtmpfs: initialized
Sep 12 17:40:59.083162 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:40:59.083181 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 12 17:40:59.083199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:40:59.083218 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:40:59.083236 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:40:59.083254 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:40:59.083273 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:40:59.083291 kernel: audit: type=2000 audit(1757698858.133:1): state=initialized audit_enabled=0 res=1
Sep 12 17:40:59.083313 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:40:59.083332 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:40:59.083350 kernel: cpuidle: using governor menu
Sep 12 17:40:59.083368 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:40:59.083387 kernel: dca service started, version 1.12.1
Sep 12 17:40:59.083405 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:40:59.083423 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:40:59.083442 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:40:59.083460 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:40:59.083482 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:40:59.083509 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:40:59.083528 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:40:59.083547 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:40:59.083564 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:40:59.083583 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 17:40:59.083601 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:40:59.083621 kernel: ACPI: Interpreter enabled
Sep 12 17:40:59.083638 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 17:40:59.083662 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:40:59.083682 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:40:59.083701 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 17:40:59.083720 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 12 17:40:59.083739 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:40:59.084053 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:40:59.084276 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:40:59.084470 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:40:59.084500 kernel: PCI host bridge to bus 0000:00
Sep 12 17:40:59.084692 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:40:59.084867 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:40:59.085079 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:40:59.085254 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 12 17:40:59.085425 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:40:59.085632 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:40:59.085847 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Sep 12 17:40:59.086076 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 12 17:40:59.086287 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 17:40:59.086497 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Sep 12 17:40:59.086697 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Sep 12 17:40:59.086894 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Sep 12 17:40:59.087184 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:40:59.087391 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Sep 12 17:40:59.087586 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Sep 12 17:40:59.087789 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:40:59.088026 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Sep 12 17:40:59.088233 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Sep 12 17:40:59.088264 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:40:59.088285 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:40:59.088303 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:40:59.088322 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:40:59.088343 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:40:59.088363 kernel: iommu: Default domain type: Translated
Sep 12 17:40:59.088383 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:40:59.088403 kernel: efivars: Registered efivars operations
Sep 12 17:40:59.088421 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:40:59.088441 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:40:59.088464 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 12 17:40:59.088483 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 12 17:40:59.088503 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 12 17:40:59.088522 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 12 17:40:59.088540 kernel: vgaarb: loaded
Sep 12 17:40:59.088559 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:40:59.088578 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:40:59.088598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:40:59.088622 kernel: pnp: PnP ACPI init
Sep 12 17:40:59.088641 kernel: pnp: PnP ACPI: found 7 devices
Sep 12 17:40:59.088661 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:40:59.088681 kernel: NET: Registered PF_INET protocol family
Sep 12 17:40:59.088701 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:40:59.088721 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 12 17:40:59.088741 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:40:59.088761 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:40:59.088780 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 12 17:40:59.088803 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 12 17:40:59.088823 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:40:59.088843 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:40:59.088862 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:40:59.088882 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:40:59.089106 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:40:59.089307 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:40:59.089481 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:40:59.089655 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 12 17:40:59.089846 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:40:59.089870 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:40:59.089890 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 17:40:59.089909 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 12 17:40:59.089928 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:40:59.089948 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 12 17:40:59.089994 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:40:59.090021 kernel: Initialise system trusted keyrings
Sep 12 17:40:59.090049 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 12 17:40:59.090068 kernel: Key type asymmetric registered
Sep 12 17:40:59.090087 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:40:59.090105 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:40:59.090125 kernel: io scheduler mq-deadline registered
Sep 12 17:40:59.090144 kernel: io scheduler kyber registered
Sep 12 17:40:59.090163 kernel: io scheduler bfq registered
Sep 12 17:40:59.090182 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:40:59.090206 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 12 17:40:59.090401 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 12 17:40:59.090426 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 12 17:40:59.090610 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 12 17:40:59.090634 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 12 17:40:59.090815 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 12 17:40:59.090838 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:40:59.090858 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:40:59.090877 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 12 17:40:59.090901 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 12 17:40:59.090920 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 12 17:40:59.091153 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 12 17:40:59.091180 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:40:59.091199 kernel: i8042: Warning: Keylock active
Sep 12 17:40:59.091218 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:40:59.091238 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:40:59.091427 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 17:40:59.091607 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 17:40:59.091777 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:40:58 UTC (1757698858)
Sep 12 17:40:59.091947 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 17:40:59.091993 kernel: intel_pstate: CPU model not supported
Sep 12 17:40:59.092012 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:40:59.092031 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:40:59.092058 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:40:59.092077 kernel: Segment Routing with IPv6
Sep 12 17:40:59.092101 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:40:59.092121 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:40:59.092139 kernel: Key type dns_resolver registered Sep 12 17:40:59.092158 kernel: IPI shorthand broadcast: enabled Sep 12 17:40:59.092177 kernel: sched_clock: Marking stable (821004317, 130407882)->(976058403, -24646204) Sep 12 17:40:59.092197 kernel: registered taskstats version 1 Sep 12 17:40:59.092216 kernel: Loading compiled-in X.509 certificates Sep 12 17:40:59.092235 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:40:59.092253 kernel: Key type .fscrypt registered Sep 12 17:40:59.092276 kernel: Key type fscrypt-provisioning registered Sep 12 17:40:59.092295 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:40:59.092314 kernel: ima: No architecture policies found Sep 12 17:40:59.092333 kernel: clk: Disabling unused clocks Sep 12 17:40:59.092352 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:40:59.092371 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:40:59.092390 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:40:59.092409 kernel: Run /init as init process Sep 12 17:40:59.092428 kernel: with arguments: Sep 12 17:40:59.092451 kernel: /init Sep 12 17:40:59.092469 kernel: with environment: Sep 12 17:40:59.092488 kernel: HOME=/ Sep 12 17:40:59.092506 kernel: TERM=linux Sep 12 17:40:59.092526 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:40:59.092545 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:40:59.092567 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 
17:40:59.092593 systemd[1]: Detected virtualization google. Sep 12 17:40:59.092614 systemd[1]: Detected architecture x86-64. Sep 12 17:40:59.092633 systemd[1]: Running in initrd. Sep 12 17:40:59.092652 systemd[1]: No hostname configured, using default hostname. Sep 12 17:40:59.092672 systemd[1]: Hostname set to . Sep 12 17:40:59.092693 systemd[1]: Initializing machine ID from random generator. Sep 12 17:40:59.092713 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:40:59.092732 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:40:59.092756 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:40:59.092777 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:40:59.092797 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:40:59.092817 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:40:59.092837 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:40:59.092860 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:40:59.092880 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:40:59.092904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:40:59.092925 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:40:59.092996 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:40:59.093022 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:40:59.093050 systemd[1]: Reached target swap.target - Swaps. 
Sep 12 17:40:59.093071 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:40:59.093096 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:40:59.093116 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:40:59.093138 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:40:59.093159 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:40:59.093179 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:40:59.093200 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:40:59.093221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:40:59.093242 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:40:59.093263 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:40:59.093288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:40:59.093309 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:40:59.093330 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:40:59.093351 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:40:59.093372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:40:59.093392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:40:59.093413 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:40:59.093465 systemd-journald[183]: Collecting audit messages is disabled. Sep 12 17:40:59.093511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:40:59.093532 systemd-journald[183]: Journal started Sep 12 17:40:59.093576 systemd-journald[183]: Runtime Journal (/run/log/journal/a0c23842c43640d9b26c1314c7a00cf5) is 8.0M, max 148.7M, 140.7M free. 
Sep 12 17:40:59.096372 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:40:59.099987 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:40:59.102243 systemd-modules-load[184]: Inserted module 'overlay' Sep 12 17:40:59.110182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:40:59.123102 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:40:59.128350 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:40:59.134484 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:40:59.144178 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:40:59.161985 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:40:59.164680 systemd-modules-load[184]: Inserted module 'br_netfilter' Sep 12 17:40:59.164917 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:40:59.165158 kernel: Bridge firewalling registered Sep 12 17:40:59.166368 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:40:59.167000 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:40:59.172186 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:40:59.185226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:40:59.195619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:40:59.203163 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:40:59.211436 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:40:59.222135 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:40:59.244146 systemd-resolved[215]: Positive Trust Anchors: Sep 12 17:40:59.244603 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:40:59.244815 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:40:59.263096 dracut-cmdline[218]: dracut-dracut-053 Sep 12 17:40:59.263096 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:40:59.251118 systemd-resolved[215]: Defaulting to hostname 'linux'. Sep 12 17:40:59.254205 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:40:59.267156 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:40:59.355008 kernel: SCSI subsystem initialized Sep 12 17:40:59.367004 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 17:40:59.378984 kernel: iscsi: registered transport (tcp) Sep 12 17:40:59.403210 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:40:59.403273 kernel: QLogic iSCSI HBA Driver Sep 12 17:40:59.454902 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:40:59.459218 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:40:59.500003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:40:59.500059 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:40:59.501932 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:40:59.546000 kernel: raid6: avx2x4 gen() 18391 MB/s Sep 12 17:40:59.562992 kernel: raid6: avx2x2 gen() 18307 MB/s Sep 12 17:40:59.580377 kernel: raid6: avx2x1 gen() 14315 MB/s Sep 12 17:40:59.580421 kernel: raid6: using algorithm avx2x4 gen() 18391 MB/s Sep 12 17:40:59.598370 kernel: raid6: .... xor() 7970 MB/s, rmw enabled Sep 12 17:40:59.598409 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:40:59.620998 kernel: xor: automatically using best checksumming function avx Sep 12 17:40:59.793011 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:40:59.806348 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:40:59.813233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:40:59.841243 systemd-udevd[400]: Using default interface naming scheme 'v255'. Sep 12 17:40:59.848345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:40:59.861875 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:40:59.879948 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Sep 12 17:40:59.915634 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 17:40:59.933150 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:41:00.015416 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:41:00.024201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:41:00.067057 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:41:00.071979 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:41:00.081113 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:41:00.085067 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:41:00.093179 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:41:00.132801 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:41:00.146239 kernel: scsi host0: Virtio SCSI HBA Sep 12 17:41:00.156563 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 12 17:41:00.169986 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:41:00.203990 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:41:00.204049 kernel: AES CTR mode by8 optimization enabled Sep 12 17:41:00.240723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:41:00.240940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:41:00.245556 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:41:00.249039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 17:41:00.271224 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 12 17:41:00.271537 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 12 17:41:00.271794 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 12 17:41:00.273239 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 12 17:41:00.273490 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 17:41:00.249250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:41:00.253547 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:41:00.282364 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:41:00.282399 kernel: GPT:17805311 != 25165823 Sep 12 17:41:00.282423 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:41:00.265302 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:41:00.291074 kernel: GPT:17805311 != 25165823 Sep 12 17:41:00.291108 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:41:00.291135 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:41:00.291161 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 12 17:41:00.314977 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:41:00.324266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:41:00.349981 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (450) Sep 12 17:41:00.357006 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (465) Sep 12 17:41:00.382865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:41:00.397138 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
Sep 12 17:41:00.404779 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Sep 12 17:41:00.414741 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 12 17:41:00.414940 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Sep 12 17:41:00.436929 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 12 17:41:00.445172 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:41:00.458649 disk-uuid[549]: Primary Header is updated. Sep 12 17:41:00.458649 disk-uuid[549]: Secondary Entries is updated. Sep 12 17:41:00.458649 disk-uuid[549]: Secondary Header is updated. Sep 12 17:41:00.473020 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:41:00.495000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:41:00.502000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:41:01.503005 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:41:01.504148 disk-uuid[550]: The operation has completed successfully. Sep 12 17:41:01.579724 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:41:01.579890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:41:01.619195 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:41:01.650080 sh[567]: Success Sep 12 17:41:01.673007 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:41:01.752788 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:41:01.759745 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:41:01.780357 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 17:41:01.815999 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:41:01.816071 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:41:01.832703 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:41:01.832744 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:41:01.839521 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:41:01.869992 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:41:01.875855 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:41:01.876759 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:41:01.882155 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:41:01.894155 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:41:01.957993 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:41:01.958059 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:41:01.965065 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:41:01.981668 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:41:01.981726 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:41:01.999317 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:41:02.017133 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:41:02.025736 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:41:02.044232 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 17:41:02.128242 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:41:02.133194 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:41:02.251277 systemd-networkd[750]: lo: Link UP Sep 12 17:41:02.251290 systemd-networkd[750]: lo: Gained carrier Sep 12 17:41:02.253934 systemd-networkd[750]: Enumeration completed Sep 12 17:41:02.259546 ignition[678]: Ignition 2.19.0 Sep 12 17:41:02.254651 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:41:02.259561 ignition[678]: Stage: fetch-offline Sep 12 17:41:02.254658 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:41:02.259653 ignition[678]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:41:02.256246 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:41:02.259671 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:41:02.257251 systemd-networkd[750]: eth0: Link UP Sep 12 17:41:02.259848 ignition[678]: parsed url from cmdline: "" Sep 12 17:41:02.257258 systemd-networkd[750]: eth0: Gained carrier Sep 12 17:41:02.259855 ignition[678]: no config URL provided Sep 12 17:41:02.257270 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:41:02.259866 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:41:02.259715 systemd[1]: Reached target network.target - Network. 
Sep 12 17:41:02.259877 ignition[678]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:41:02.266041 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.94/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 12 17:41:02.259886 ignition[678]: failed to fetch config: resource requires networking Sep 12 17:41:02.275509 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:41:02.260179 ignition[678]: Ignition finished successfully Sep 12 17:41:02.307171 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 17:41:02.347940 ignition[760]: Ignition 2.19.0 Sep 12 17:41:02.357610 unknown[760]: fetched base config from "system" Sep 12 17:41:02.347954 ignition[760]: Stage: fetch Sep 12 17:41:02.357623 unknown[760]: fetched base config from "system" Sep 12 17:41:02.348241 ignition[760]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:41:02.357634 unknown[760]: fetched user config from "gcp" Sep 12 17:41:02.348257 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:41:02.370497 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:41:02.348395 ignition[760]: parsed url from cmdline: "" Sep 12 17:41:02.396248 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:41:02.348404 ignition[760]: no config URL provided Sep 12 17:41:02.442382 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:41:02.348412 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:41:02.464203 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:41:02.348425 ignition[760]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:41:02.503330 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 12 17:41:02.348454 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 12 17:41:02.508426 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:41:02.351175 ignition[760]: GET result: OK Sep 12 17:41:02.537194 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:41:02.351321 ignition[760]: parsing config with SHA512: 5a05a0219e8690b23858c43c81b231cd5e4d4a37a64d5c97d5ca21f0af4671cbe78735eeee3d92723c1ee2030b01a789c677c76473cfe4399483f5f969373474 Sep 12 17:41:02.544247 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:41:02.358418 ignition[760]: fetch: fetch complete Sep 12 17:41:02.572084 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:41:02.358425 ignition[760]: fetch: fetch passed Sep 12 17:41:02.586115 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:41:02.358483 ignition[760]: Ignition finished successfully Sep 12 17:41:02.607161 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 12 17:41:02.440004 ignition[767]: Ignition 2.19.0 Sep 12 17:41:02.440016 ignition[767]: Stage: kargs Sep 12 17:41:02.440261 ignition[767]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:41:02.440274 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:41:02.441223 ignition[767]: kargs: kargs passed Sep 12 17:41:02.441280 ignition[767]: Ignition finished successfully Sep 12 17:41:02.500854 ignition[772]: Ignition 2.19.0 Sep 12 17:41:02.500864 ignition[772]: Stage: disks Sep 12 17:41:02.501247 ignition[772]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:41:02.501262 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:41:02.502222 ignition[772]: disks: disks passed Sep 12 17:41:02.502282 ignition[772]: Ignition finished successfully Sep 12 17:41:02.673430 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 12 17:41:02.852022 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:41:02.857131 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:41:03.006044 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:41:03.007122 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:41:03.016730 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:41:03.041079 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:41:03.061384 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:41:03.070619 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:41:03.070701 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Sep 12 17:41:03.169151 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790) Sep 12 17:41:03.169200 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:41:03.169227 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:41:03.169251 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:41:03.169283 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:41:03.169299 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:41:03.070732 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:41:03.140321 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:41:03.150151 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:41:03.186189 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:41:03.290768 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:41:03.301118 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:41:03.311200 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:41:03.322076 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:41:03.443622 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:41:03.449101 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:41:03.484991 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:41:03.494173 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:41:03.503458 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:41:03.535137 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 17:41:03.543105 ignition[902]: INFO : Ignition 2.19.0 Sep 12 17:41:03.543105 ignition[902]: INFO : Stage: mount Sep 12 17:41:03.543105 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:41:03.543105 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:41:03.543105 ignition[902]: INFO : mount: mount passed Sep 12 17:41:03.543105 ignition[902]: INFO : Ignition finished successfully Sep 12 17:41:03.554447 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:41:03.567130 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:41:04.013201 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:41:04.027105 systemd-networkd[750]: eth0: Gained IPv6LL Sep 12 17:41:04.061088 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914) Sep 12 17:41:04.078326 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:41:04.078385 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:41:04.078412 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:41:04.099664 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:41:04.099727 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:41:04.102760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:41:04.141682 ignition[931]: INFO : Ignition 2.19.0
Sep 12 17:41:04.141682 ignition[931]: INFO : Stage: files
Sep 12 17:41:04.156080 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:04.156080 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:41:04.156080 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:41:04.156080 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:41:04.156080 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 17:41:04.156080 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 12 17:41:04.149362 unknown[931]: wrote ssh authorized keys file for user: core
Sep 12 17:41:04.409790 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:41:05.065209 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:41:05.082091 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 12 17:41:05.481397 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:41:06.182201 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:41:06.182201 ignition[931]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:41:06.219134 ignition[931]: INFO : files: files passed
Sep 12 17:41:06.219134 ignition[931]: INFO : Ignition finished successfully
Sep 12 17:41:06.186871 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:41:06.207261 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:41:06.251219 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:41:06.264519 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:41:06.432089 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:41:06.432089 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:41:06.264642 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:41:06.470236 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:41:06.336665 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:41:06.348342 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:41:06.369151 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:41:06.467483 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:41:06.467602 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:41:06.481258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:41:06.505074 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:41:06.525154 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:41:06.530151 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:41:06.581891 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:41:06.606176 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:41:06.644268 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:41:06.659349 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:41:06.670449 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:41:06.690353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:41:06.690524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:41:06.726359 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:41:06.751415 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:41:06.762382 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:41:06.789213 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:41:06.808265 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:41:06.829273 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:41:06.849220 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:41:06.870263 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:41:06.890331 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:41:06.911223 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:41:06.929231 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:41:06.929433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:41:06.956342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:41:06.974224 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:41:06.995257 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:41:06.995441 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:41:07.016218 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:41:07.016473 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:41:07.045341 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:41:07.045562 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:41:07.069312 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:41:07.069478 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:41:07.145186 ignition[983]: INFO : Ignition 2.19.0
Sep 12 17:41:07.145186 ignition[983]: INFO : Stage: umount
Sep 12 17:41:07.145186 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:41:07.145186 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:41:07.145186 ignition[983]: INFO : umount: umount passed
Sep 12 17:41:07.145186 ignition[983]: INFO : Ignition finished successfully
Sep 12 17:41:07.093201 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:41:07.109210 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:41:07.145364 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:41:07.145670 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:41:07.197314 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:41:07.197486 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:41:07.224708 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:41:07.225739 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:41:07.225849 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:41:07.241616 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:41:07.241720 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:41:07.261519 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:41:07.261656 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:41:07.272271 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:41:07.272323 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:41:07.290357 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:41:07.290423 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:41:07.307309 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:41:07.307362 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:41:07.324295 systemd[1]: Stopped target network.target - Network.
Sep 12 17:41:07.339272 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:41:07.339343 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:41:07.354336 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:41:07.372223 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:41:07.376046 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:41:07.398258 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:41:07.407266 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:41:07.422263 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:41:07.422317 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:41:07.440286 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:41:07.440339 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:41:07.457277 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:41:07.457336 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:41:07.474336 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:41:07.474401 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:41:07.491270 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:41:07.491323 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:41:07.508500 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:41:07.513050 systemd-networkd[750]: eth0: DHCPv6 lease lost
Sep 12 17:41:07.536301 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:41:07.544671 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:41:07.544796 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:41:07.559585 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:41:07.559702 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:41:07.580401 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:41:07.580452 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:41:07.600097 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:41:07.620036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:41:07.620122 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:41:07.640149 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:41:07.640249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:41:07.650147 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:41:07.650261 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:41:07.673311 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:41:07.673489 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:41:08.113076 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:41:07.691311 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:41:07.714392 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:41:07.714613 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:41:07.726578 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:41:07.726684 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:41:07.745697 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:41:07.745772 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:41:07.771229 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:41:07.771280 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:41:07.779273 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:41:07.779332 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:41:07.830082 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:41:07.830298 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:41:07.858224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:41:07.858419 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:41:07.908156 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:41:07.920043 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:41:07.920127 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:41:07.931132 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:41:07.931222 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:41:07.942237 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:41:07.942293 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:41:07.953262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:41:07.953314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:41:07.989661 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:41:07.989768 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:41:08.000351 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:41:08.023169 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:41:08.069328 systemd[1]: Switching root.
Sep 12 17:41:08.401076 systemd-journald[183]: Journal stopped
Sep 12 17:41:10.768456 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:41:10.768493 kernel: SELinux: policy capability open_perms=1
Sep 12 17:41:10.768508 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:41:10.768521 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:41:10.768532 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:41:10.768542 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:41:10.768555 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:41:10.768570 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:41:10.768581 kernel: audit: type=1403 audit(1757698868.729:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:41:10.768595 systemd[1]: Successfully loaded SELinux policy in 78.793ms.
Sep 12 17:41:10.768609 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.604ms.
Sep 12 17:41:10.768622 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:41:10.768634 systemd[1]: Detected virtualization google.
Sep 12 17:41:10.768646 systemd[1]: Detected architecture x86-64.
Sep 12 17:41:10.768663 systemd[1]: Detected first boot.
Sep 12 17:41:10.768676 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:41:10.768689 zram_generator::config[1025]: No configuration found.
Sep 12 17:41:10.768703 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:41:10.768715 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:41:10.768731 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:41:10.768743 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:41:10.768757 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:41:10.768770 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:41:10.768783 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:41:10.768797 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:41:10.768811 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:41:10.768827 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:41:10.768840 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:41:10.768853 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:41:10.768866 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:41:10.768879 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:41:10.768892 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:41:10.768905 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:41:10.768918 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:41:10.768934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:41:10.768949 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:41:10.768987 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:41:10.769009 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:41:10.769028 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:41:10.769049 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:41:10.769077 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:41:10.769096 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:41:10.769117 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:41:10.769144 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:41:10.769165 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:41:10.769193 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:41:10.769214 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:41:10.769235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:41:10.769256 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:41:10.769277 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:41:10.769304 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:41:10.769327 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:41:10.769348 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:41:10.769372 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:41:10.769394 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:41:10.769421 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:41:10.769443 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:41:10.769469 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:41:10.769494 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:41:10.769516 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:41:10.769538 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:41:10.769562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:41:10.769586 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:41:10.769614 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:41:10.769639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:41:10.769662 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:41:10.769686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:41:10.769711 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:41:10.769735 kernel: fuse: init (API version 7.39)
Sep 12 17:41:10.769758 kernel: ACPI: bus type drm_connector registered
Sep 12 17:41:10.769781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:41:10.769808 kernel: loop: module loaded
Sep 12 17:41:10.769831 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:41:10.769855 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:41:10.769879 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:41:10.769903 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:41:10.769927 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:41:10.769950 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:41:10.770006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:41:10.770031 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:41:10.770089 systemd-journald[1112]: Collecting audit messages is disabled.
Sep 12 17:41:10.770128 systemd-journald[1112]: Journal started
Sep 12 17:41:10.770174 systemd-journald[1112]: Runtime Journal (/run/log/journal/5d72032a82cb4600bcdbf47274ce1fed) is 8.0M, max 148.7M, 140.7M free.
Sep 12 17:41:09.561946 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:41:09.584571 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 17:41:09.585134 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:41:10.785995 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:41:10.811991 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:41:10.829062 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:41:10.829121 systemd[1]: Stopped verity-setup.service.
Sep 12 17:41:10.859989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:41:10.868994 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:41:10.879476 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:41:10.889292 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:41:10.899265 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:41:10.909267 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:41:10.919250 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:41:10.929222 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:41:10.939326 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:41:10.950337 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:41:10.961365 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:41:10.961589 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:41:10.973364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:41:10.973581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:41:10.985363 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:41:10.985574 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:41:10.995350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:41:10.995561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:41:11.007345 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:41:11.007554 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:41:11.017405 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:41:11.017619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:41:11.027360 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:41:11.037359 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:41:11.048349 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:41:11.059324 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:41:11.083334 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:41:11.105104 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:41:11.127095 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:41:11.137096 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:41:11.137156 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:41:11.148269 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:41:11.171186 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:41:11.189168 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:41:11.199248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:41:11.205233 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:41:11.224506 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:41:11.237576 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:41:11.250161 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:41:11.260133 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:41:11.269797 systemd-journald[1112]: Time spent on flushing to /var/log/journal/5d72032a82cb4600bcdbf47274ce1fed is 53.364ms for 929 entries.
Sep 12 17:41:11.269797 systemd-journald[1112]: System Journal (/var/log/journal/5d72032a82cb4600bcdbf47274ce1fed) is 8.0M, max 584.8M, 576.8M free.
Sep 12 17:41:11.348147 systemd-journald[1112]: Received client request to flush runtime journal.
Sep 12 17:41:11.268210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:41:11.293216 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:41:11.314200 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:41:11.335362 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:41:11.353390 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:41:11.374753 kernel: loop0: detected capacity change from 0 to 54824
Sep 12 17:41:11.368948 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:41:11.380422 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:41:11.392628 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:41:11.404493 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:41:11.416531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:41:11.436561 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Sep 12 17:41:11.443162 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:41:11.443946 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Sep 12 17:41:11.448696 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:41:11.469278 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:41:11.479275 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:41:11.491029 kernel: loop1: detected capacity change from 0 to 140768 Sep 12 17:41:11.506386 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:41:11.519466 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:41:11.537807 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:41:11.541768 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:41:11.585431 kernel: loop2: detected capacity change from 0 to 142488 Sep 12 17:41:11.645200 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:41:11.667228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 12 17:41:11.701472 kernel: loop3: detected capacity change from 0 to 229808 Sep 12 17:41:11.725169 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Sep 12 17:41:11.725624 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Sep 12 17:41:11.737481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:41:11.816364 kernel: loop4: detected capacity change from 0 to 54824 Sep 12 17:41:11.849992 kernel: loop5: detected capacity change from 0 to 140768 Sep 12 17:41:11.904029 kernel: loop6: detected capacity change from 0 to 142488 Sep 12 17:41:11.958454 kernel: loop7: detected capacity change from 0 to 229808 Sep 12 17:41:11.992589 (sd-merge)[1170]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Sep 12 17:41:11.993657 (sd-merge)[1170]: Merged extensions into '/usr'. Sep 12 17:41:12.006222 systemd[1]: Reloading requested from client PID 1143 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:41:12.006244 systemd[1]: Reloading... Sep 12 17:41:12.174737 zram_generator::config[1196]: No configuration found. Sep 12 17:41:12.358366 ldconfig[1138]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:41:12.457057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:41:12.563616 systemd[1]: Reloading finished in 556 ms. Sep 12 17:41:12.599461 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:41:12.609510 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:41:12.633237 systemd[1]: Starting ensure-sysext.service... Sep 12 17:41:12.651341 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Sep 12 17:41:12.673116 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:41:12.673135 systemd[1]: Reloading... Sep 12 17:41:12.697472 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:41:12.699261 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:41:12.701295 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:41:12.702082 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Sep 12 17:41:12.702322 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Sep 12 17:41:12.707355 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:41:12.707472 systemd-tmpfiles[1237]: Skipping /boot Sep 12 17:41:12.723649 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:41:12.723681 systemd-tmpfiles[1237]: Skipping /boot Sep 12 17:41:12.791999 zram_generator::config[1263]: No configuration found. Sep 12 17:41:12.930833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:41:12.995926 systemd[1]: Reloading finished in 322 ms. Sep 12 17:41:13.012661 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:41:13.031564 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:41:13.056309 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:41:13.076295 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 12 17:41:13.095341 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:41:13.115291 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:41:13.127122 augenrules[1325]: No rules Sep 12 17:41:13.133754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:41:13.154863 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:41:13.167761 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:41:13.180820 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:41:13.194239 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Sep 12 17:41:13.202692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:41:13.203366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:41:13.211167 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:41:13.226266 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:41:13.245464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:41:13.255232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:41:13.264214 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:41:13.286215 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:41:13.296068 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:41:13.301444 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:41:13.317629 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:41:13.330730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:41:13.331666 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:41:13.344245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:41:13.344505 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:41:13.357142 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:41:13.357395 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:41:13.369178 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:41:13.383540 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:41:13.397354 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:41:13.460701 systemd[1]: Finished ensure-sysext.service. Sep 12 17:41:13.473593 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:41:13.474773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:41:13.476171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:41:13.486181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:41:13.505185 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:41:13.518188 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:41:13.539192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 12 17:41:13.558200 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:41:13.566251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:41:13.579012 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:41:13.589141 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:41:13.602045 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1352) Sep 12 17:41:13.609126 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:41:13.609344 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:41:13.612745 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:41:13.613063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:41:13.624624 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:41:13.626038 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:41:13.636602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:41:13.637685 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:41:13.649585 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:41:13.650034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:41:13.708693 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:41:13.751375 systemd-resolved[1323]: Positive Trust Anchors: Sep 12 17:41:13.755037 systemd-resolved[1323]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:41:13.755114 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:41:13.760244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 12 17:41:13.781002 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 17:41:13.784303 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Sep 12 17:41:13.787648 systemd-resolved[1323]: Defaulting to hostname 'linux'. Sep 12 17:41:13.804670 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:41:13.814562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:41:13.814674 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:41:13.815038 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:41:13.827010 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 17:41:13.839981 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 12 17:41:13.846174 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 12 17:41:13.903067 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:41:13.922051 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Sep 12 17:41:13.926160 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:41:13.936014 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 12 17:41:13.946504 systemd-networkd[1383]: lo: Link UP Sep 12 17:41:13.950007 systemd-networkd[1383]: lo: Gained carrier Sep 12 17:41:13.951984 kernel: ACPI: button: Sleep Button [SLPF] Sep 12 17:41:13.957471 systemd-networkd[1383]: Enumeration completed Sep 12 17:41:13.958171 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:41:13.960254 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:41:13.960270 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:41:13.962124 systemd-networkd[1383]: eth0: Link UP Sep 12 17:41:13.962955 systemd-networkd[1383]: eth0: Gained carrier Sep 12 17:41:13.963143 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:41:13.980008 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:41:13.982901 systemd[1]: Reached target network.target - Network. Sep 12 17:41:13.983068 systemd-networkd[1383]: eth0: DHCPv4 address 10.128.0.94/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 12 17:41:13.992030 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:41:14.002785 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:41:14.020241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 17:41:14.038889 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:41:14.056227 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:41:14.074031 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:41:14.103314 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:41:14.104782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:41:14.112430 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:41:14.123013 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:41:14.150003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:41:14.161471 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:41:14.173943 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:41:14.184272 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:41:14.195127 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:41:14.206305 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:41:14.216200 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:41:14.227084 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:41:14.238099 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:41:14.238162 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:41:14.246083 systemd[1]: Reached target timers.target - Timer Units. 
Sep 12 17:41:14.256123 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:41:14.267724 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:41:14.284753 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:41:14.294854 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:41:14.305209 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:41:14.315060 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:41:14.323133 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:41:14.323189 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:41:14.333122 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:41:14.348182 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:41:14.367187 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:41:14.385409 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:41:14.408389 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:41:14.418078 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:41:14.424514 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:41:14.425437 jq[1428]: false Sep 12 17:41:14.445329 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:41:14.463200 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 12 17:41:14.470615 coreos-metadata[1426]: Sep 12 17:41:14.470 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Sep 12 17:41:14.475750 coreos-metadata[1426]: Sep 12 17:41:14.475 INFO Fetch successful Sep 12 17:41:14.475944 coreos-metadata[1426]: Sep 12 17:41:14.475 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Sep 12 17:41:14.478255 coreos-metadata[1426]: Sep 12 17:41:14.478 INFO Fetch successful Sep 12 17:41:14.478658 coreos-metadata[1426]: Sep 12 17:41:14.478 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Sep 12 17:41:14.479188 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:41:14.480092 coreos-metadata[1426]: Sep 12 17:41:14.479 INFO Fetch successful Sep 12 17:41:14.480238 coreos-metadata[1426]: Sep 12 17:41:14.480 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Sep 12 17:41:14.480541 coreos-metadata[1426]: Sep 12 17:41:14.480 INFO Fetch successful Sep 12 17:41:14.499730 extend-filesystems[1429]: Found loop4 Sep 12 17:41:14.501692 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 12 17:41:14.505704 dbus-daemon[1427]: [system] SELinux support is enabled Sep 12 17:41:14.521259 extend-filesystems[1429]: Found loop5 Sep 12 17:41:14.521259 extend-filesystems[1429]: Found loop6 Sep 12 17:41:14.521259 extend-filesystems[1429]: Found loop7 Sep 12 17:41:14.521259 extend-filesystems[1429]: Found sda Sep 12 17:41:14.521259 extend-filesystems[1429]: Found sda1 Sep 12 17:41:14.521259 extend-filesystems[1429]: Found sda2 Sep 12 17:41:14.521259 extend-filesystems[1429]: Found sda3 Sep 12 17:41:14.521259 extend-filesystems[1429]: Found usr Sep 12 17:41:14.521259 extend-filesystems[1429]: Found sda4 Sep 12 17:41:14.521259 extend-filesystems[1429]: Found sda6 Sep 12 17:41:14.521259 extend-filesystems[1429]: Found sda7 Sep 12 17:41:14.521259 extend-filesystems[1429]: Found sda9 Sep 12 17:41:14.521259 extend-filesystems[1429]: Checking size of /dev/sda9 Sep 12 17:41:14.694120 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 12 17:41:14.694167 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 12 17:41:14.694202 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1343) Sep 12 17:41:14.512173 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 12 17:41:14.513306 dbus-daemon[1427]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1383 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:41:14.694530 extend-filesystems[1429]: Resized partition /dev/sda9 Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:30:39 UTC 2025 (1): Starting Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: ---------------------------------------------------- Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: corporation. 
Support and training for ntp-4 are Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: available at https://www.nwtime.org/support Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: ---------------------------------------------------- Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: proto: precision = 0.111 usec (-23) Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: basedate set to 2025-08-31 Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: gps base set to 2025-08-31 (week 2382) Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: Listen normally on 3 eth0 10.128.0.94:123 Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: Listen normally on 4 lo [::1]:123 Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: bind(21) AF_INET6 fe80::4001:aff:fe80:5e%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:5e%2#123 Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: failed to init interface for address fe80::4001:aff:fe80:5e%2 Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: Listening on routing socket on fd #21 for interface updates Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:41:14.704069 ntpd[1434]: 12 Sep 17:41:14 ntpd[1434]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:41:14.526678 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). 
Sep 12 17:41:14.561594 ntpd[1434]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:30:39 UTC 2025 (1): Starting Sep 12 17:41:14.705710 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:41:14.705710 extend-filesystems[1453]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 17:41:14.705710 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 12 17:41:14.705710 extend-filesystems[1453]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 12 17:41:14.527355 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:41:14.561624 ntpd[1434]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:41:14.783479 extend-filesystems[1429]: Resized filesystem in /dev/sda9 Sep 12 17:41:14.535266 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:41:14.561639 ntpd[1434]: ---------------------------------------------------- Sep 12 17:41:14.794589 update_engine[1450]: I20250912 17:41:14.640887 1450 main.cc:92] Flatcar Update Engine starting Sep 12 17:41:14.794589 update_engine[1450]: I20250912 17:41:14.655658 1450 update_check_scheduler.cc:74] Next update check in 10m23s Sep 12 17:41:14.579214 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:41:14.561655 ntpd[1434]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:41:14.802078 jq[1456]: true Sep 12 17:41:14.605831 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:41:14.561669 ntpd[1434]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:41:14.641491 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:41:14.561682 ntpd[1434]: corporation. Support and training for ntp-4 are Sep 12 17:41:14.641763 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 17:41:14.561696 ntpd[1434]: available at https://www.nwtime.org/support Sep 12 17:41:14.643297 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:41:14.561710 ntpd[1434]: ---------------------------------------------------- Sep 12 17:41:14.643588 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:41:14.567426 ntpd[1434]: proto: precision = 0.111 usec (-23) Sep 12 17:41:14.673497 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:41:14.570597 ntpd[1434]: basedate set to 2025-08-31 Sep 12 17:41:14.674278 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:41:14.570623 ntpd[1434]: gps base set to 2025-08-31 (week 2382) Sep 12 17:41:14.720584 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:41:14.575173 ntpd[1434]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:41:14.721462 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:41:14.575234 ntpd[1434]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:41:14.727857 systemd-logind[1445]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 17:41:14.577174 ntpd[1434]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:41:14.727888 systemd-logind[1445]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 12 17:41:14.577686 ntpd[1434]: Listen normally on 3 eth0 10.128.0.94:123 Sep 12 17:41:14.727918 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:41:14.577770 ntpd[1434]: Listen normally on 4 lo [::1]:123 Sep 12 17:41:14.728412 systemd-logind[1445]: New seat seat0. Sep 12 17:41:14.578091 ntpd[1434]: bind(21) AF_INET6 fe80::4001:aff:fe80:5e%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:41:14.739565 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 12 17:41:14.578128 ntpd[1434]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:5e%2#123 Sep 12 17:41:14.806499 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:41:14.578153 ntpd[1434]: failed to init interface for address fe80::4001:aff:fe80:5e%2 Sep 12 17:41:14.578201 ntpd[1434]: Listening on routing socket on fd #21 for interface updates Sep 12 17:41:14.588058 ntpd[1434]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:41:14.588095 ntpd[1434]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:41:14.820181 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:41:14.825640 jq[1464]: true Sep 12 17:41:14.851837 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:41:14.893268 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:41:14.901993 tar[1463]: linux-amd64/LICENSE Sep 12 17:41:14.901993 tar[1463]: linux-amd64/helm Sep 12 17:41:14.910548 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:41:14.921083 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:41:14.921329 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:41:14.921567 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:41:14.944299 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 17:41:14.954104 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 12 17:41:14.954362 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:41:14.973615 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:41:14.991448 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:41:15.000111 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:41:15.022312 systemd[1]: Starting sshkeys.service... Sep 12 17:41:15.088771 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:41:15.106806 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:41:15.264537 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:41:15.264894 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:41:15.266817 dbus-daemon[1427]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1492 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:41:15.298069 systemd[1]: Starting polkit.service - Authorization Manager... 
Sep 12 17:41:15.312115 coreos-metadata[1500]: Sep 12 17:41:15.312 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 12 17:41:15.342127 coreos-metadata[1500]: Sep 12 17:41:15.319 INFO Fetch failed with 404: resource not found Sep 12 17:41:15.342127 coreos-metadata[1500]: Sep 12 17:41:15.319 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 12 17:41:15.342127 coreos-metadata[1500]: Sep 12 17:41:15.335 INFO Fetch successful Sep 12 17:41:15.342127 coreos-metadata[1500]: Sep 12 17:41:15.335 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 12 17:41:15.342127 coreos-metadata[1500]: Sep 12 17:41:15.341 INFO Fetch failed with 404: resource not found Sep 12 17:41:15.342127 coreos-metadata[1500]: Sep 12 17:41:15.341 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 12 17:41:15.345192 coreos-metadata[1500]: Sep 12 17:41:15.345 INFO Fetch failed with 404: resource not found Sep 12 17:41:15.345192 coreos-metadata[1500]: Sep 12 17:41:15.345 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 12 17:41:15.345922 coreos-metadata[1500]: Sep 12 17:41:15.345 INFO Fetch successful Sep 12 17:41:15.357930 unknown[1500]: wrote ssh authorized keys file for user: core Sep 12 17:41:15.436111 update-ssh-keys[1512]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:41:15.435942 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:41:15.438797 polkitd[1507]: Started polkitd version 121 Sep 12 17:41:15.456786 systemd[1]: Finished sshkeys.service. 
Sep 12 17:41:15.477569 polkitd[1507]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:41:15.477667 polkitd[1507]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:41:15.483824 polkitd[1507]: Finished loading, compiling and executing 2 rules Sep 12 17:41:15.485506 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:41:15.485720 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 17:41:15.490046 polkitd[1507]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:41:15.548584 systemd-hostnamed[1492]: Hostname set to (transient) Sep 12 17:41:15.549209 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:41:15.550121 systemd-resolved[1323]: System hostname changed to 'ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal'. Sep 12 17:41:15.563163 ntpd[1434]: bind(24) AF_INET6 fe80::4001:aff:fe80:5e%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:41:15.564601 ntpd[1434]: 12 Sep 17:41:15 ntpd[1434]: bind(24) AF_INET6 fe80::4001:aff:fe80:5e%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:41:15.564601 ntpd[1434]: 12 Sep 17:41:15 ntpd[1434]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:5e%2#123 Sep 12 17:41:15.564601 ntpd[1434]: 12 Sep 17:41:15 ntpd[1434]: failed to init interface for address fe80::4001:aff:fe80:5e%2 Sep 12 17:41:15.563219 ntpd[1434]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:5e%2#123 Sep 12 17:41:15.563248 ntpd[1434]: failed to init interface for address fe80::4001:aff:fe80:5e%2 Sep 12 17:41:15.587416 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:41:15.592239 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:41:15.611940 systemd[1]: Starting issuegen.service - Generate /run/issue... 
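The ntpd bind failure above is transient: `fe80::4001:aff:fe80:5e%2` is a link-local address, and the `%2` suffix is a zone (scope) index naming the interface, which ntpd cannot bind until the address finishes duplicate-address detection (it succeeds later in the log with "Listen normally on 7 eth0"). A small sketch of splitting such a scoped address; `split_scoped` is a hypothetical helper, not part of ntpd:

```python
import ipaddress

def split_scoped(addr):
    """Split 'fe80::1%2' into (IPv6Address, zone_index_or_None).
    Link-local (fe80::/10) addresses are only unambiguous together
    with a zone index, since every interface has its own fe80::/64."""
    host, _, zone = addr.partition("%")
    ip = ipaddress.IPv6Address(host)
    return ip, (zone or None)

ip, zone = split_scoped("fe80::4001:aff:fe80:5e%2")
# ip.is_link_local is True; zone "2" names the interface (eth0 here)
```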
Sep 12 17:41:15.629908 systemd[1]: Started sshd@0-10.128.0.94:22-139.178.89.65:32784.service - OpenSSH per-connection server daemon (139.178.89.65:32784). Sep 12 17:41:15.634659 containerd[1466]: time="2025-09-12T17:41:15.634557031Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:41:15.657722 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:41:15.658304 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:41:15.681401 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:41:15.726380 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:41:15.747475 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:41:15.754653 containerd[1466]: time="2025-09-12T17:41:15.754565812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:41:15.760832 containerd[1466]: time="2025-09-12T17:41:15.760781276Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:41:15.760998 containerd[1466]: time="2025-09-12T17:41:15.760944396Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:41:15.761917 containerd[1466]: time="2025-09-12T17:41:15.761685482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:41:15.761824 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:41:15.763005 containerd[1466]: time="2025-09-12T17:41:15.762618457Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 12 17:41:15.763005 containerd[1466]: time="2025-09-12T17:41:15.762663275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:41:15.763005 containerd[1466]: time="2025-09-12T17:41:15.762802333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:41:15.763005 containerd[1466]: time="2025-09-12T17:41:15.762829470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:41:15.763585 containerd[1466]: time="2025-09-12T17:41:15.763548198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:41:15.763704 containerd[1466]: time="2025-09-12T17:41:15.763681350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:41:15.763807 containerd[1466]: time="2025-09-12T17:41:15.763784065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:41:15.763889 containerd[1466]: time="2025-09-12T17:41:15.763869084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:41:15.764323 containerd[1466]: time="2025-09-12T17:41:15.764286177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:41:15.764856 containerd[1466]: time="2025-09-12T17:41:15.764780071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:41:15.765630 containerd[1466]: time="2025-09-12T17:41:15.765143143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:41:15.765630 containerd[1466]: time="2025-09-12T17:41:15.765178811Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:41:15.765630 containerd[1466]: time="2025-09-12T17:41:15.765340282Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:41:15.765630 containerd[1466]: time="2025-09-12T17:41:15.765417920Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:41:15.772471 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:41:15.776402 containerd[1466]: time="2025-09-12T17:41:15.776363582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:41:15.776509 containerd[1466]: time="2025-09-12T17:41:15.776434450Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:41:15.776509 containerd[1466]: time="2025-09-12T17:41:15.776464264Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:41:15.776509 containerd[1466]: time="2025-09-12T17:41:15.776493233Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:41:15.776665 containerd[1466]: time="2025-09-12T17:41:15.776521239Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.776730767Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777307117Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777472711Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777493965Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777508475Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777524896Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777540666Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777555645Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777571799Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777588590Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777604214Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777619879Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777633582Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:41:15.778017 containerd[1466]: time="2025-09-12T17:41:15.777656942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777673681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777690973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777706916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777721909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777751450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777771211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777786137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777800770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777817122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777830315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777844205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777859224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777875689Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777898968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.778815 containerd[1466]: time="2025-09-12T17:41:15.777912837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.779513 containerd[1466]: time="2025-09-12T17:41:15.777926682Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:41:15.779513 containerd[1466]: time="2025-09-12T17:41:15.778005492Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:41:15.779513 containerd[1466]: time="2025-09-12T17:41:15.778034215Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:41:15.779513 containerd[1466]: time="2025-09-12T17:41:15.778054746Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:41:15.779513 containerd[1466]: time="2025-09-12T17:41:15.778076812Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:41:15.779513 containerd[1466]: time="2025-09-12T17:41:15.778096274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:41:15.779513 containerd[1466]: time="2025-09-12T17:41:15.778121859Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:41:15.779513 containerd[1466]: time="2025-09-12T17:41:15.778150048Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:41:15.779513 containerd[1466]: time="2025-09-12T17:41:15.778170275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:41:15.779923 containerd[1466]: time="2025-09-12T17:41:15.778662396Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:41:15.779923 containerd[1466]: time="2025-09-12T17:41:15.778764250Z" level=info msg="Connect containerd service" Sep 12 17:41:15.779923 containerd[1466]: time="2025-09-12T17:41:15.778848267Z" level=info msg="using legacy CRI server" Sep 12 17:41:15.779923 containerd[1466]: time="2025-09-12T17:41:15.778863910Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:41:15.779923 containerd[1466]: time="2025-09-12T17:41:15.779053873Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:41:15.783471 containerd[1466]: time="2025-09-12T17:41:15.783055024Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:41:15.783471 containerd[1466]: time="2025-09-12T17:41:15.783238578Z" level=info msg="Start subscribing containerd event" Sep 12 17:41:15.783471 containerd[1466]: time="2025-09-12T17:41:15.783404557Z" level=info msg="Start recovering state" Sep 12 17:41:15.783850 containerd[1466]: time="2025-09-12T17:41:15.783819261Z" level=info msg="Start event monitor" Sep 12 17:41:15.783920 containerd[1466]: time="2025-09-12T17:41:15.783851717Z" level=info msg="Start 
snapshots syncer" Sep 12 17:41:15.783920 containerd[1466]: time="2025-09-12T17:41:15.783869730Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:41:15.783920 containerd[1466]: time="2025-09-12T17:41:15.783885185Z" level=info msg="Start streaming server" Sep 12 17:41:15.784508 containerd[1466]: time="2025-09-12T17:41:15.784456424Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:41:15.784701 containerd[1466]: time="2025-09-12T17:41:15.784671790Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:41:15.788152 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:41:15.788426 containerd[1466]: time="2025-09-12T17:41:15.788297652Z" level=info msg="containerd successfully booted in 0.155837s" Sep 12 17:41:15.867216 systemd-networkd[1383]: eth0: Gained IPv6LL Sep 12 17:41:15.873152 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:41:15.884854 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:41:15.904295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:15.925060 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:41:15.941305 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Sep 12 17:41:15.974628 init.sh[1550]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 12 17:41:15.978208 init.sh[1550]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 12 17:41:15.978208 init.sh[1550]: + /usr/bin/google_instance_setup Sep 12 17:41:15.988152 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
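The containerd startup above shows its plugin-loading pattern: each snapshotter's precondition is checked at init, and failures (missing aufs module, ext4 instead of btrfs, devmapper unconfigured, no zfs filesystem) are downgraded to "skip plugin" messages instead of aborting the daemon. A minimal sketch of that pattern, with an injected logger and hypothetical plugin names; this mirrors the log shape, not containerd's actual internals:

```python
def load_plugins(plugins, log):
    """Each plugin is (name, init) where init() raises if its
    precondition fails; failures are logged and the plugin skipped,
    mirroring containerd's 'skip loading plugin ...: skip plugin'."""
    loaded, skipped = [], []
    for name, init in plugins:
        log(f'loading plugin "{name}"...')
        try:
            init()
        except Exception as err:
            log(f'skip loading plugin "{name}"...: {err}: skip plugin')
            skipped.append(name)
        else:
            loaded.append(name)
    return loaded, skipped
```

The design choice matters operationally: a node without btrfs or zfs still boots containerd with the overlayfs snapshotter, and the skip messages are informational, not errors.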
Sep 12 17:41:16.089506 sshd[1535]: Accepted publickey for core from 139.178.89.65 port 32784 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:41:16.094098 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:16.112062 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:41:16.132401 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:41:16.146180 systemd-logind[1445]: New session 1 of user core. Sep 12 17:41:16.178236 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:41:16.198419 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:41:16.230926 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:41:16.346060 tar[1463]: linux-amd64/README.md Sep 12 17:41:16.367567 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:41:16.486803 systemd[1562]: Queued start job for default target default.target. Sep 12 17:41:16.493947 systemd[1562]: Created slice app.slice - User Application Slice. Sep 12 17:41:16.494032 systemd[1562]: Reached target paths.target - Paths. Sep 12 17:41:16.494058 systemd[1562]: Reached target timers.target - Timers. Sep 12 17:41:16.496119 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:41:16.527077 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:41:16.527271 systemd[1562]: Reached target sockets.target - Sockets. Sep 12 17:41:16.527308 systemd[1562]: Reached target basic.target - Basic System. Sep 12 17:41:16.527390 systemd[1562]: Reached target default.target - Main User Target. Sep 12 17:41:16.527448 systemd[1562]: Startup finished in 280ms. Sep 12 17:41:16.528245 systemd[1]: Started user@500.service - User Manager for UID 500. 
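The `SHA256:gjw0...` value sshd logs for the accepted public key is the standard OpenSSH fingerprint format: the SHA-256 digest of the raw binary key blob (the base64-decoded middle field of an `authorized_keys` line), base64-encoded with trailing `=` padding stripped. A sketch:

```python
import base64
import hashlib

def openssh_fingerprint(key_blob: bytes) -> str:
    """OpenSSH-style fingerprint: 'SHA256:' plus the unpadded base64
    of the SHA-256 digest of the binary public-key blob."""
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```

Since a SHA-256 digest is 32 bytes, the encoded part is always 43 characters, which is why these fingerprints have a fixed width in the log.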
Sep 12 17:41:16.548522 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:41:16.800553 instance-setup[1557]: INFO Running google_set_multiqueue. Sep 12 17:41:16.838918 instance-setup[1557]: INFO Set channels for eth0 to 2. Sep 12 17:41:16.849821 instance-setup[1557]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Sep 12 17:41:16.852153 instance-setup[1557]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 12 17:41:16.852834 instance-setup[1557]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 12 17:41:16.855147 instance-setup[1557]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 12 17:41:16.856052 instance-setup[1557]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 12 17:41:16.858180 instance-setup[1557]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 12 17:41:16.859161 instance-setup[1557]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Sep 12 17:41:16.865376 systemd[1]: Started sshd@1-10.128.0.94:22-139.178.89.65:32792.service - OpenSSH per-connection server daemon (139.178.89.65:32792). Sep 12 17:41:16.867651 instance-setup[1557]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 12 17:41:16.897364 instance-setup[1557]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 12 17:41:16.905832 instance-setup[1557]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 12 17:41:16.912925 instance-setup[1557]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 12 17:41:16.913015 instance-setup[1557]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 12 17:41:16.933674 init.sh[1550]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 12 17:41:17.107628 startup-script[1608]: INFO Starting startup scripts. 
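The `Queue 0 XPS=1` / `Queue 1 XPS=2` lines above are per-queue CPU bitmasks written to `/sys/class/net/eth0/queues/tx-N/xps_cpus`: on this 2-vCPU instance each transmit queue is steered to one CPU (bit 0 = CPU 0, bit 1 = CPU 1). A sketch of that mask computation; the round-robin queue-to-CPU assignment is an assumption here, not necessarily what `google_set_multiqueue` does internally:

```python
def xps_masks(num_queues: int, num_cpus: int):
    """One CPU bitmask (hex string, as sysfs expects) per tx queue:
    queue q is assumed pinned to CPU (q % num_cpus), so its mask is
    1 << cpu - matching 'Queue 0 XPS=1 ... Queue 1 XPS=2' above."""
    return [format(1 << (q % num_cpus), "x") for q in range(num_queues)]
```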
Sep 12 17:41:17.112748 startup-script[1608]: INFO No startup scripts found in metadata. Sep 12 17:41:17.112830 startup-script[1608]: INFO Finished running startup scripts. Sep 12 17:41:17.133774 init.sh[1550]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 12 17:41:17.133774 init.sh[1550]: + daemon_pids=() Sep 12 17:41:17.133774 init.sh[1550]: + for d in accounts clock_skew network Sep 12 17:41:17.133774 init.sh[1550]: + daemon_pids+=($!) Sep 12 17:41:17.133774 init.sh[1550]: + for d in accounts clock_skew network Sep 12 17:41:17.134524 init.sh[1611]: + /usr/bin/google_accounts_daemon Sep 12 17:41:17.134866 init.sh[1550]: + daemon_pids+=($!) Sep 12 17:41:17.134866 init.sh[1550]: + for d in accounts clock_skew network Sep 12 17:41:17.134991 init.sh[1612]: + /usr/bin/google_clock_skew_daemon Sep 12 17:41:17.135322 init.sh[1550]: + daemon_pids+=($!) Sep 12 17:41:17.135322 init.sh[1550]: + NOTIFY_SOCKET=/run/systemd/notify Sep 12 17:41:17.135322 init.sh[1550]: + /usr/bin/systemd-notify --ready Sep 12 17:41:17.135474 init.sh[1613]: + /usr/bin/google_network_daemon Sep 12 17:41:17.145480 systemd[1]: Started oem-gce.service - GCE Linux Agent. Sep 12 17:41:17.164990 init.sh[1550]: + wait -n 1611 1612 1613 Sep 12 17:41:17.295833 sshd[1597]: Accepted publickey for core from 139.178.89.65 port 32792 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:41:17.296788 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:17.311069 systemd-logind[1445]: New session 2 of user core. Sep 12 17:41:17.316493 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:41:17.516863 google-networking[1613]: INFO Starting Google Networking daemon. Sep 12 17:41:17.530855 google-clock-skew[1612]: INFO Starting Google Clock Skew daemon. Sep 12 17:41:17.542596 google-clock-skew[1612]: INFO Clock drift token has changed: 0. 
Sep 12 17:41:17.580640 sshd[1597]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:17.591932 systemd[1]: sshd@1-10.128.0.94:22-139.178.89.65:32792.service: Deactivated successfully. Sep 12 17:41:17.595393 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:41:17.596799 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:41:17.599353 systemd-logind[1445]: Removed session 2. Sep 12 17:41:17.600542 groupadd[1625]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 12 17:41:17.604757 groupadd[1625]: group added to /etc/gshadow: name=google-sudoers Sep 12 17:41:17.654292 systemd[1]: Started sshd@2-10.128.0.94:22-139.178.89.65:32808.service - OpenSSH per-connection server daemon (139.178.89.65:32808). Sep 12 17:41:17.658369 groupadd[1625]: new group: name=google-sudoers, GID=1000 Sep 12 17:41:17.696354 google-accounts[1611]: INFO Starting Google Accounts daemon. Sep 12 17:41:18.001360 systemd-resolved[1323]: Clock change detected. Flushing caches. Sep 12 17:41:18.001871 google-clock-skew[1612]: INFO Synced system time with hardware clock. Sep 12 17:41:18.011401 google-accounts[1611]: WARNING OS Login not installed. Sep 12 17:41:18.012569 google-accounts[1611]: INFO Creating a new user account for 0. Sep 12 17:41:18.018013 init.sh[1638]: useradd: invalid user name '0': use --badname to ignore Sep 12 17:41:18.018299 google-accounts[1611]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 12 17:41:18.331264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:18.343076 systemd[1]: Reached target multi-user.target - Multi-User System. 
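The `useradd: invalid user name '0'` failure above is the accounts daemon trying to create an account literally named `0`: shadow-utils rejects login names that start with a digit unless `--badname` is passed. A sketch of the traditional portable rule (start with a lowercase letter or `_`, then letters, digits, `_` or `-`, optional trailing `$` for machine accounts, at most 32 characters); distributions vary slightly, so treat this regex as the customary default, not a specification:

```python
import re

# Traditional portable login-name rule used by shadow-utils by default.
NAME_RE = re.compile(r"^[a-z_][a-z0-9_-]*\$?$")

def valid_login_name(name: str) -> bool:
    """True if name satisfies the conventional useradd constraints."""
    return 0 < len(name) <= 32 and bool(NAME_RE.fullmatch(name))
```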
Sep 12 17:41:18.346682 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:18.353982 systemd[1]: Startup finished in 990ms (kernel) + 9.963s (initrd) + 9.399s (userspace) = 20.353s. Sep 12 17:41:18.357936 sshd[1632]: Accepted publickey for core from 139.178.89.65 port 32808 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:41:18.359777 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:18.376546 systemd-logind[1445]: New session 3 of user core. Sep 12 17:41:18.381330 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:41:18.635857 sshd[1632]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:18.641418 systemd[1]: sshd@2-10.128.0.94:22-139.178.89.65:32808.service: Deactivated successfully. Sep 12 17:41:18.643913 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:41:18.646504 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:41:18.647954 systemd-logind[1445]: Removed session 3. Sep 12 17:41:18.863342 ntpd[1434]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:5e%2]:123 Sep 12 17:41:18.863925 ntpd[1434]: 12 Sep 17:41:18 ntpd[1434]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:5e%2]:123 Sep 12 17:41:19.252427 kubelet[1645]: E0912 17:41:19.252344 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:19.255610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:19.255872 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:41:19.256345 systemd[1]: kubelet.service: Consumed 1.301s CPU time. Sep 12 17:41:28.709462 systemd[1]: Started sshd@3-10.128.0.94:22-139.178.89.65:50314.service - OpenSSH per-connection server daemon (139.178.89.65:50314). Sep 12 17:41:29.083123 sshd[1662]: Accepted publickey for core from 139.178.89.65 port 50314 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:41:29.085014 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:29.091887 systemd-logind[1445]: New session 4 of user core. Sep 12 17:41:29.101409 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:41:29.302683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:41:29.308363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:29.360450 sshd[1662]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:29.366218 systemd[1]: sshd@3-10.128.0.94:22-139.178.89.65:50314.service: Deactivated successfully. Sep 12 17:41:29.370924 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:41:29.372355 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:41:29.374429 systemd-logind[1445]: Removed session 4. Sep 12 17:41:29.428486 systemd[1]: Started sshd@4-10.128.0.94:22-139.178.89.65:50322.service - OpenSSH per-connection server daemon (139.178.89.65:50322). Sep 12 17:41:29.667251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:41:29.679619 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:29.730915 kubelet[1679]: E0912 17:41:29.730876 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:29.735612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:29.735889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:41:29.809219 sshd[1672]: Accepted publickey for core from 139.178.89.65 port 50322 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:41:29.810961 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:29.817265 systemd-logind[1445]: New session 5 of user core. Sep 12 17:41:29.827365 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:41:30.079717 sshd[1672]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:30.085623 systemd[1]: sshd@4-10.128.0.94:22-139.178.89.65:50322.service: Deactivated successfully. Sep 12 17:41:30.087894 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:41:30.088790 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:41:30.090391 systemd-logind[1445]: Removed session 5. Sep 12 17:41:30.153469 systemd[1]: Started sshd@5-10.128.0.94:22-139.178.89.65:37854.service - OpenSSH per-connection server daemon (139.178.89.65:37854). 
Sep 12 17:41:30.524609 sshd[1691]: Accepted publickey for core from 139.178.89.65 port 37854 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:41:30.526144 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:30.531980 systemd-logind[1445]: New session 6 of user core.
Sep 12 17:41:30.543280 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:41:30.798710 sshd[1691]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:30.803215 systemd[1]: sshd@5-10.128.0.94:22-139.178.89.65:37854.service: Deactivated successfully.
Sep 12 17:41:30.805631 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:41:30.807638 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:41:30.808969 systemd-logind[1445]: Removed session 6.
Sep 12 17:41:30.859877 systemd[1]: Started sshd@6-10.128.0.94:22-139.178.89.65:37870.service - OpenSSH per-connection server daemon (139.178.89.65:37870).
Sep 12 17:41:31.227455 sshd[1698]: Accepted publickey for core from 139.178.89.65 port 37870 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:41:31.229531 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:31.236408 systemd-logind[1445]: New session 7 of user core.
Sep 12 17:41:31.241300 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:41:31.457931 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:41:31.458467 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:41:31.471895 sudo[1701]: pam_unix(sudo:session): session closed for user root
Sep 12 17:41:31.527137 sshd[1698]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:31.531986 systemd[1]: sshd@6-10.128.0.94:22-139.178.89.65:37870.service: Deactivated successfully.
Sep 12 17:41:31.534366 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:41:31.536588 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:41:31.538160 systemd-logind[1445]: Removed session 7.
Sep 12 17:41:31.603748 systemd[1]: Started sshd@7-10.128.0.94:22-139.178.89.65:37886.service - OpenSSH per-connection server daemon (139.178.89.65:37886).
Sep 12 17:41:31.975308 sshd[1706]: Accepted publickey for core from 139.178.89.65 port 37886 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:41:31.977250 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:31.983587 systemd-logind[1445]: New session 8 of user core.
Sep 12 17:41:31.994278 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:41:32.199763 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:41:32.200298 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:41:32.205133 sudo[1710]: pam_unix(sudo:session): session closed for user root
Sep 12 17:41:32.217954 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 12 17:41:32.218453 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:41:32.234482 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 12 17:41:32.238028 auditctl[1713]: No rules
Sep 12 17:41:32.239399 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:41:32.239687 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 12 17:41:32.243635 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:41:32.277753 augenrules[1731]: No rules
Sep 12 17:41:32.279389 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:41:32.281535 sudo[1709]: pam_unix(sudo:session): session closed for user root
Sep 12 17:41:32.339679 sshd[1706]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:32.344039 systemd[1]: sshd@7-10.128.0.94:22-139.178.89.65:37886.service: Deactivated successfully.
Sep 12 17:41:32.346342 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:41:32.348146 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:41:32.349603 systemd-logind[1445]: Removed session 8.
Sep 12 17:41:32.410446 systemd[1]: Started sshd@8-10.128.0.94:22-139.178.89.65:37894.service - OpenSSH per-connection server daemon (139.178.89.65:37894).
Sep 12 17:41:32.780834 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 37894 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:41:32.782675 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:32.788161 systemd-logind[1445]: New session 9 of user core.
Sep 12 17:41:32.795289 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:41:33.004539 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:41:33.005031 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:41:33.433475 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:41:33.437373 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:41:33.872507 dockerd[1757]: time="2025-09-12T17:41:33.871759612Z" level=info msg="Starting up"
Sep 12 17:41:34.021818 dockerd[1757]: time="2025-09-12T17:41:34.021767992Z" level=info msg="Loading containers: start."
Sep 12 17:41:34.159303 kernel: Initializing XFRM netlink socket
Sep 12 17:41:34.267239 systemd-networkd[1383]: docker0: Link UP
Sep 12 17:41:34.281823 dockerd[1757]: time="2025-09-12T17:41:34.281770415Z" level=info msg="Loading containers: done."
Sep 12 17:41:34.301992 dockerd[1757]: time="2025-09-12T17:41:34.301930809Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:41:34.302268 dockerd[1757]: time="2025-09-12T17:41:34.302115575Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 12 17:41:34.302404 dockerd[1757]: time="2025-09-12T17:41:34.302279256Z" level=info msg="Daemon has completed initialization"
Sep 12 17:41:34.336844 dockerd[1757]: time="2025-09-12T17:41:34.336672880Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:41:34.337079 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:41:35.337334 containerd[1466]: time="2025-09-12T17:41:35.337288072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 17:41:35.871325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516102336.mount: Deactivated successfully.
Sep 12 17:41:37.694974 containerd[1466]: time="2025-09-12T17:41:37.694905467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:37.696617 containerd[1466]: time="2025-09-12T17:41:37.696550222Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30122476"
Sep 12 17:41:37.698030 containerd[1466]: time="2025-09-12T17:41:37.697499422Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:37.701226 containerd[1466]: time="2025-09-12T17:41:37.701181202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:37.702955 containerd[1466]: time="2025-09-12T17:41:37.702906570Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.365559669s"
Sep 12 17:41:37.703176 containerd[1466]: time="2025-09-12T17:41:37.703143570Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 12 17:41:37.704719 containerd[1466]: time="2025-09-12T17:41:37.704476762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 17:41:39.349550 containerd[1466]: time="2025-09-12T17:41:39.349488764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:39.351182 containerd[1466]: time="2025-09-12T17:41:39.351112485Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26022778"
Sep 12 17:41:39.352760 containerd[1466]: time="2025-09-12T17:41:39.352181062Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:39.355703 containerd[1466]: time="2025-09-12T17:41:39.355661578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:39.357209 containerd[1466]: time="2025-09-12T17:41:39.357168967Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.652647034s"
Sep 12 17:41:39.357357 containerd[1466]: time="2025-09-12T17:41:39.357331152Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 12 17:41:39.358042 containerd[1466]: time="2025-09-12T17:41:39.357999632Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 17:41:39.986319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:41:39.997384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:40.369307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
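The containerd pull records above report both the stored image size and the wall-clock pull time. A small Python sketch (not part of the log) that derives effective pull throughput from the kube-apiserver figures logged above:

```python
def pull_throughput_mib_s(size_bytes: int, seconds: float) -> float:
    """Effective image-pull throughput in MiB/s."""
    return size_bytes / (1024 * 1024) / seconds

# Figures from the kube-apiserver pull above:
# size "30111492" bytes, pulled in 2.365559669 s.
print(round(pull_throughput_mib_s(30111492, 2.365559669), 2))
```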
Sep 12 17:41:40.384162 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:41:40.477218 kubelet[1969]: E0912 17:41:40.476723 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:41:40.484970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:41:40.485239 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:41:41.023494 containerd[1466]: time="2025-09-12T17:41:41.023427231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:41.025029 containerd[1466]: time="2025-09-12T17:41:41.024965298Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20157484"
Sep 12 17:41:41.026497 containerd[1466]: time="2025-09-12T17:41:41.025935160Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:41.029419 containerd[1466]: time="2025-09-12T17:41:41.029379683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:41.031782 containerd[1466]: time="2025-09-12T17:41:41.030999331Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.672952622s"
Sep 12 17:41:41.031782 containerd[1466]: time="2025-09-12T17:41:41.031044519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 12 17:41:41.032681 containerd[1466]: time="2025-09-12T17:41:41.032631182Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 17:41:42.323932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099184679.mount: Deactivated successfully.
Sep 12 17:41:43.062560 containerd[1466]: time="2025-09-12T17:41:43.062495194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:43.063966 containerd[1466]: time="2025-09-12T17:41:43.063735159Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31931364"
Sep 12 17:41:43.066388 containerd[1466]: time="2025-09-12T17:41:43.064937506Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:43.068449 containerd[1466]: time="2025-09-12T17:41:43.067437477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:43.068449 containerd[1466]: time="2025-09-12T17:41:43.068278889Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.03560511s"
Sep 12 17:41:43.068449 containerd[1466]: time="2025-09-12T17:41:43.068320333Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 12 17:41:43.069195 containerd[1466]: time="2025-09-12T17:41:43.069119021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 17:41:43.468966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888255433.mount: Deactivated successfully.
Sep 12 17:41:44.740574 containerd[1466]: time="2025-09-12T17:41:44.740500053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:44.742310 containerd[1466]: time="2025-09-12T17:41:44.742244924Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880"
Sep 12 17:41:44.743297 containerd[1466]: time="2025-09-12T17:41:44.743254802Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:44.747134 containerd[1466]: time="2025-09-12T17:41:44.746785051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:44.748582 containerd[1466]: time="2025-09-12T17:41:44.748365983Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.679203543s"
Sep 12 17:41:44.748582 containerd[1466]: time="2025-09-12T17:41:44.748412774Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 12 17:41:44.749554 containerd[1466]: time="2025-09-12T17:41:44.749312720Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:41:45.216934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942761925.mount: Deactivated successfully.
Sep 12 17:41:45.222073 containerd[1466]: time="2025-09-12T17:41:45.222018004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:45.223270 containerd[1466]: time="2025-09-12T17:41:45.223200424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Sep 12 17:41:45.225929 containerd[1466]: time="2025-09-12T17:41:45.224247040Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:45.228266 containerd[1466]: time="2025-09-12T17:41:45.227128412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:45.228266 containerd[1466]: time="2025-09-12T17:41:45.228106906Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 478.70135ms"
Sep 12 17:41:45.228266 containerd[1466]: time="2025-09-12T17:41:45.228156917Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:41:45.231138 containerd[1466]: time="2025-09-12T17:41:45.230145236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 17:41:45.668030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232470599.mount: Deactivated successfully.
Sep 12 17:41:45.884522 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 17:41:48.729230 containerd[1466]: time="2025-09-12T17:41:48.729161425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:48.730952 containerd[1466]: time="2025-09-12T17:41:48.730884714Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58384071"
Sep 12 17:41:48.732133 containerd[1466]: time="2025-09-12T17:41:48.731820526Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:48.737199 containerd[1466]: time="2025-09-12T17:41:48.736608624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:48.738207 containerd[1466]: time="2025-09-12T17:41:48.738164994Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.507979772s"
Sep 12 17:41:48.738321 containerd[1466]: time="2025-09-12T17:41:48.738211913Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 12 17:41:50.614690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 17:41:50.623381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:50.935336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:50.939245 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:41:51.008114 kubelet[2131]: E0912 17:41:51.006745 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:41:51.010306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:41:51.010599 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:41:52.460737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:52.467486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:52.512584 systemd[1]: Reloading requested from client PID 2145 ('systemctl') (unit session-9.scope)...
Sep 12 17:41:52.512613 systemd[1]: Reloading...
Sep 12 17:41:52.705136 zram_generator::config[2188]: No configuration found.
Sep 12 17:41:52.853183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:41:52.958778 systemd[1]: Reloading finished in 445 ms.
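The "Scheduled restart job, restart counter is at N" lines above come from the unit's systemd Restart= policy, which keeps relaunching the kubelet until its config file appears. A sketch of the kind of [Service] settings that produce this behavior (a typical kubeadm-style kubelet.service; the actual unit on this host is not shown in the log, so treat this as an assumption):

```ini
# Hypothetical excerpt of kubelet.service on this host.
[Service]
Restart=always
RestartSec=10
StartLimitInterval=0
```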
Sep 12 17:41:53.034685 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:53.038811 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:41:53.039142 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:53.045488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:53.327257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:53.340683 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:41:53.403902 kubelet[2239]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:41:53.404408 kubelet[2239]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:41:53.404408 kubelet[2239]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:41:53.404408 kubelet[2239]: I0912 17:41:53.404229 2239 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:41:53.939072 kubelet[2239]: I0912 17:41:53.939020 2239 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 17:41:53.939072 kubelet[2239]: I0912 17:41:53.939065 2239 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:41:53.943412 kubelet[2239]: I0912 17:41:53.939999 2239 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 17:41:53.989420 kubelet[2239]: E0912 17:41:53.989376 2239 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 12 17:41:53.989609 kubelet[2239]: I0912 17:41:53.989394 2239 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:41:54.005642 kubelet[2239]: E0912 17:41:54.005572 2239 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:41:54.005642 kubelet[2239]: I0912 17:41:54.005629 2239 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:41:54.010242 kubelet[2239]: I0912 17:41:54.010207 2239 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:41:54.010658 kubelet[2239]: I0912 17:41:54.010601 2239 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:41:54.010884 kubelet[2239]: I0912 17:41:54.010641 2239 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:41:54.010884 kubelet[2239]: I0912 17:41:54.010880 2239 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:41:54.011134 kubelet[2239]: I0912 17:41:54.010898 2239 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 17:41:54.012332 kubelet[2239]: I0912 17:41:54.012294 2239 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:41:54.016720 kubelet[2239]: I0912 17:41:54.016674 2239 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 17:41:54.016720 kubelet[2239]: I0912 17:41:54.016706 2239 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:41:54.018325 kubelet[2239]: I0912 17:41:54.017627 2239 kubelet.go:386] "Adding apiserver pod source"
Sep 12 17:41:54.018325 kubelet[2239]: I0912 17:41:54.017663 2239 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:41:54.025792 kubelet[2239]: E0912 17:41:54.025756 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 17:41:54.026357 kubelet[2239]: I0912 17:41:54.026330 2239 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:41:54.027397 kubelet[2239]: I0912 17:41:54.027352 2239 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 17:41:54.030262 kubelet[2239]: W0912 17:41:54.029241 2239 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:41:54.039739 kubelet[2239]: E0912 17:41:54.039689 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 17:41:54.045962 kubelet[2239]: I0912 17:41:54.045931 2239 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:41:54.046055 kubelet[2239]: I0912 17:41:54.046006 2239 server.go:1289] "Started kubelet"
Sep 12 17:41:54.046264 kubelet[2239]: I0912 17:41:54.046172 2239 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:41:54.048142 kubelet[2239]: I0912 17:41:54.047517 2239 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:41:54.048337 kubelet[2239]: I0912 17:41:54.048296 2239 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:41:54.050804 kubelet[2239]: I0912 17:41:54.050783 2239 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 17:41:54.051581 kubelet[2239]: I0912 17:41:54.051547 2239 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:41:54.066838 kubelet[2239]: I0912 17:41:54.066814 2239 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:41:54.069425 kubelet[2239]: I0912 17:41:54.069401 2239 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:41:54.069935 kubelet[2239]: E0912 17:41:54.069903 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found"
Sep 12 17:41:54.072119 kubelet[2239]: I0912 17:41:54.070796 2239 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:41:54.072119 kubelet[2239]: I0912 17:41:54.070894 2239 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:41:54.072119 kubelet[2239]: E0912 17:41:54.071756 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 17:41:54.072119 kubelet[2239]: E0912 17:41:54.071862 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.94:6443: connect: connection refused" interval="200ms"
Sep 12 17:41:54.074484 kubelet[2239]: E0912 17:41:54.071938 2239 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal.186499d981cc9e8b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,UID:ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,},FirstTimestamp:2025-09-12 17:41:54.045959819 +0000 UTC m=+0.699140942,LastTimestamp:2025-09-12 17:41:54.045959819 +0000 UTC m=+0.699140942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,}"
Sep 12 17:41:54.075062 kubelet[2239]: I0912 17:41:54.075022 2239 factory.go:223] Registration of the systemd container factory successfully
Sep 12 17:41:54.075305 kubelet[2239]: I0912 17:41:54.075278 2239 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:41:54.079159 kubelet[2239]: I0912 17:41:54.078689 2239 factory.go:223] Registration of the containerd container factory successfully
Sep 12 17:41:54.092603 kubelet[2239]: I0912 17:41:54.092554 2239 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:41:54.094434 kubelet[2239]: I0912 17:41:54.094401 2239 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:41:54.094434 kubelet[2239]: I0912 17:41:54.094433 2239 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 17:41:54.094587 kubelet[2239]: I0912 17:41:54.094460 2239 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:41:54.094587 kubelet[2239]: I0912 17:41:54.094472 2239 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:41:54.094587 kubelet[2239]: E0912 17:41:54.094527 2239 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:41:54.105027 kubelet[2239]: E0912 17:41:54.104990 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:41:54.105169 kubelet[2239]: E0912 17:41:54.105153 2239 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:41:54.116647 kubelet[2239]: I0912 17:41:54.116618 2239 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:41:54.116647 kubelet[2239]: I0912 17:41:54.116639 2239 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:41:54.116788 kubelet[2239]: I0912 17:41:54.116662 2239 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:41:54.118869 kubelet[2239]: I0912 17:41:54.118837 2239 policy_none.go:49] "None policy: Start" Sep 12 17:41:54.118869 kubelet[2239]: I0912 17:41:54.118865 2239 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:41:54.118869 kubelet[2239]: I0912 17:41:54.118883 2239 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:41:54.126409 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:41:54.137357 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 12 17:41:54.141653 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:41:54.153662 kubelet[2239]: E0912 17:41:54.152979 2239 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:41:54.153662 kubelet[2239]: I0912 17:41:54.153233 2239 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:41:54.153662 kubelet[2239]: I0912 17:41:54.153284 2239 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:41:54.153662 kubelet[2239]: I0912 17:41:54.153595 2239 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:41:54.155798 kubelet[2239]: E0912 17:41:54.155752 2239 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:41:54.155980 kubelet[2239]: E0912 17:41:54.155948 2239 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" Sep 12 17:41:54.217258 systemd[1]: Created slice kubepods-burstable-podb1fa1e1423d024a5e2a95d9e2fc4cd2f.slice - libcontainer container kubepods-burstable-podb1fa1e1423d024a5e2a95d9e2fc4cd2f.slice. Sep 12 17:41:54.224952 kubelet[2239]: E0912 17:41:54.224913 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.231740 systemd[1]: Created slice kubepods-burstable-pod898b18e2cf43cca6b203522a285eb54a.slice - libcontainer container kubepods-burstable-pod898b18e2cf43cca6b203522a285eb54a.slice. 
Sep 12 17:41:54.234759 kubelet[2239]: E0912 17:41:54.234729 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.237339 systemd[1]: Created slice kubepods-burstable-pod77144eb2856aff280e4fada180634348.slice - libcontainer container kubepods-burstable-pod77144eb2856aff280e4fada180634348.slice. Sep 12 17:41:54.240901 kubelet[2239]: E0912 17:41:54.240853 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.260008 kubelet[2239]: I0912 17:41:54.259964 2239 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.260417 kubelet[2239]: E0912 17:41:54.260384 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.94:6443/api/v1/nodes\": dial tcp 10.128.0.94:6443: connect: connection refused" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.272961 kubelet[2239]: E0912 17:41:54.272894 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.94:6443: connect: connection refused" interval="400ms" Sep 12 17:41:54.372516 kubelet[2239]: I0912 17:41:54.372443 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77144eb2856aff280e4fada180634348-kubeconfig\") pod 
\"kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"77144eb2856aff280e4fada180634348\") " pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.372516 kubelet[2239]: I0912 17:41:54.372505 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1fa1e1423d024a5e2a95d9e2fc4cd2f-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"b1fa1e1423d024a5e2a95d9e2fc4cd2f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.372759 kubelet[2239]: I0912 17:41:54.372540 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.372759 kubelet[2239]: I0912 17:41:54.372605 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1fa1e1423d024a5e2a95d9e2fc4cd2f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"b1fa1e1423d024a5e2a95d9e2fc4cd2f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.372759 kubelet[2239]: I0912 17:41:54.372645 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1fa1e1423d024a5e2a95d9e2fc4cd2f-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"b1fa1e1423d024a5e2a95d9e2fc4cd2f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.372759 kubelet[2239]: I0912 17:41:54.372673 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.372957 kubelet[2239]: I0912 17:41:54.372701 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.372957 kubelet[2239]: I0912 17:41:54.372733 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.372957 kubelet[2239]: I0912 17:41:54.372766 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.465962 kubelet[2239]: I0912 17:41:54.465912 2239 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.466532 kubelet[2239]: E0912 17:41:54.466316 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.94:6443/api/v1/nodes\": dial tcp 10.128.0.94:6443: connect: connection refused" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.526517 containerd[1466]: time="2025-09-12T17:41:54.526345021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,Uid:b1fa1e1423d024a5e2a95d9e2fc4cd2f,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:54.541485 containerd[1466]: time="2025-09-12T17:41:54.541129545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,Uid:898b18e2cf43cca6b203522a285eb54a,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:54.541705 containerd[1466]: time="2025-09-12T17:41:54.541652588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,Uid:77144eb2856aff280e4fada180634348,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:54.674210 kubelet[2239]: E0912 17:41:54.674157 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal?timeout=10s\": dial tcp 
10.128.0.94:6443: connect: connection refused" interval="800ms" Sep 12 17:41:54.871260 kubelet[2239]: I0912 17:41:54.871210 2239 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.871631 kubelet[2239]: E0912 17:41:54.871582 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.94:6443/api/v1/nodes\": dial tcp 10.128.0.94:6443: connect: connection refused" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:54.953011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916796298.mount: Deactivated successfully. Sep 12 17:41:54.960232 containerd[1466]: time="2025-09-12T17:41:54.960154215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:54.961471 containerd[1466]: time="2025-09-12T17:41:54.961410786Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:54.962751 containerd[1466]: time="2025-09-12T17:41:54.962693268Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:41:54.963118 containerd[1466]: time="2025-09-12T17:41:54.963045199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Sep 12 17:41:54.964578 containerd[1466]: time="2025-09-12T17:41:54.964533675Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:54.967998 containerd[1466]: time="2025-09-12T17:41:54.966134002Z" level=info msg="ImageUpdate 
event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:54.967998 containerd[1466]: time="2025-09-12T17:41:54.966351083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:41:54.968996 containerd[1466]: time="2025-09-12T17:41:54.968934538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:54.972998 containerd[1466]: time="2025-09-12T17:41:54.972767998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 431.035429ms" Sep 12 17:41:54.975752 containerd[1466]: time="2025-09-12T17:41:54.975697448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 449.249719ms" Sep 12 17:41:54.975939 containerd[1466]: time="2025-09-12T17:41:54.975896040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 434.678708ms" Sep 12 17:41:55.029374 kubelet[2239]: E0912 17:41:55.023078 2239 reflector.go:200] "Failed 
to watch" err="failed to list *v1.Node: Get \"https://10.128.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:41:55.180737 kubelet[2239]: E0912 17:41:55.180590 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:41:55.184604 containerd[1466]: time="2025-09-12T17:41:55.184328075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:55.184604 containerd[1466]: time="2025-09-12T17:41:55.184386911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:55.184604 containerd[1466]: time="2025-09-12T17:41:55.184405671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:55.184604 containerd[1466]: time="2025-09-12T17:41:55.184523074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:55.194498 containerd[1466]: time="2025-09-12T17:41:55.194073865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:55.195061 containerd[1466]: time="2025-09-12T17:41:55.194255245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:55.197550 containerd[1466]: time="2025-09-12T17:41:55.196323147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:55.197550 containerd[1466]: time="2025-09-12T17:41:55.196380073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:55.197550 containerd[1466]: time="2025-09-12T17:41:55.196432782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:55.197550 containerd[1466]: time="2025-09-12T17:41:55.196560978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:55.200342 containerd[1466]: time="2025-09-12T17:41:55.200230459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:55.200572 containerd[1466]: time="2025-09-12T17:41:55.200499466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:55.233802 systemd[1]: Started cri-containerd-361650cea44a77d8d170e74df355e73297e936f99e9e15dfbd097ae774c4e148.scope - libcontainer container 361650cea44a77d8d170e74df355e73297e936f99e9e15dfbd097ae774c4e148. Sep 12 17:41:55.241289 systemd[1]: Started cri-containerd-6aedfa56aa4a677976ca1315119bef26abd8cc7cda9a5f5c4f9102d300f0662f.scope - libcontainer container 6aedfa56aa4a677976ca1315119bef26abd8cc7cda9a5f5c4f9102d300f0662f. Sep 12 17:41:55.255275 systemd[1]: Started cri-containerd-a9532608806aa04a577e3b40181ded218927ab30f4706b7b05648e36f846fb6c.scope - libcontainer container a9532608806aa04a577e3b40181ded218927ab30f4706b7b05648e36f846fb6c. 
Sep 12 17:41:55.332454 containerd[1466]: time="2025-09-12T17:41:55.332300829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,Uid:b1fa1e1423d024a5e2a95d9e2fc4cd2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aedfa56aa4a677976ca1315119bef26abd8cc7cda9a5f5c4f9102d300f0662f\"" Sep 12 17:41:55.340627 kubelet[2239]: E0912 17:41:55.340484 2239 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-21291" Sep 12 17:41:55.347038 containerd[1466]: time="2025-09-12T17:41:55.346989592Z" level=info msg="CreateContainer within sandbox \"6aedfa56aa4a677976ca1315119bef26abd8cc7cda9a5f5c4f9102d300f0662f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:41:55.367875 kubelet[2239]: E0912 17:41:55.367732 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:41:55.390733 containerd[1466]: time="2025-09-12T17:41:55.390660852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,Uid:77144eb2856aff280e4fada180634348,Namespace:kube-system,Attempt:0,} returns sandbox id \"361650cea44a77d8d170e74df355e73297e936f99e9e15dfbd097ae774c4e148\"" Sep 12 17:41:55.394656 kubelet[2239]: E0912 17:41:55.394051 2239 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" hostnameMaxLen=63 
truncatedHostname="kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-21291" Sep 12 17:41:55.394782 containerd[1466]: time="2025-09-12T17:41:55.393190379Z" level=info msg="CreateContainer within sandbox \"6aedfa56aa4a677976ca1315119bef26abd8cc7cda9a5f5c4f9102d300f0662f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0d8f8c10ab0714a07cdf922fd32be0e5a99c63d15d1dc8cc38bb5f4e4d1f8c71\"" Sep 12 17:41:55.398414 containerd[1466]: time="2025-09-12T17:41:55.397552597Z" level=info msg="StartContainer for \"0d8f8c10ab0714a07cdf922fd32be0e5a99c63d15d1dc8cc38bb5f4e4d1f8c71\"" Sep 12 17:41:55.401529 containerd[1466]: time="2025-09-12T17:41:55.401494652Z" level=info msg="CreateContainer within sandbox \"361650cea44a77d8d170e74df355e73297e936f99e9e15dfbd097ae774c4e148\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:41:55.406170 containerd[1466]: time="2025-09-12T17:41:55.406136797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal,Uid:898b18e2cf43cca6b203522a285eb54a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9532608806aa04a577e3b40181ded218927ab30f4706b7b05648e36f846fb6c\"" Sep 12 17:41:55.408270 kubelet[2239]: E0912 17:41:55.408241 2239 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flat" Sep 12 17:41:55.413074 containerd[1466]: time="2025-09-12T17:41:55.413041169Z" level=info msg="CreateContainer within sandbox \"a9532608806aa04a577e3b40181ded218927ab30f4706b7b05648e36f846fb6c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:41:55.443790 containerd[1466]: time="2025-09-12T17:41:55.441861529Z" level=info msg="CreateContainer within sandbox 
\"361650cea44a77d8d170e74df355e73297e936f99e9e15dfbd097ae774c4e148\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b31b1f61333521d608781eba0c32a65ae25dbf7f0aa73c72e6e17d129c7e4a36\"" Sep 12 17:41:55.445130 containerd[1466]: time="2025-09-12T17:41:55.444077464Z" level=info msg="StartContainer for \"b31b1f61333521d608781eba0c32a65ae25dbf7f0aa73c72e6e17d129c7e4a36\"" Sep 12 17:41:55.448448 containerd[1466]: time="2025-09-12T17:41:55.448402556Z" level=info msg="CreateContainer within sandbox \"a9532608806aa04a577e3b40181ded218927ab30f4706b7b05648e36f846fb6c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b2cfbddd2ca690d4c67cbc3a1269f50aee71ab9fef2982e5eca06979c9916b4\"" Sep 12 17:41:55.449296 containerd[1466]: time="2025-09-12T17:41:55.449260992Z" level=info msg="StartContainer for \"4b2cfbddd2ca690d4c67cbc3a1269f50aee71ab9fef2982e5eca06979c9916b4\"" Sep 12 17:41:55.475785 kubelet[2239]: E0912 17:41:55.475735 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.94:6443: connect: connection refused" interval="1.6s" Sep 12 17:41:55.477362 systemd[1]: Started cri-containerd-0d8f8c10ab0714a07cdf922fd32be0e5a99c63d15d1dc8cc38bb5f4e4d1f8c71.scope - libcontainer container 0d8f8c10ab0714a07cdf922fd32be0e5a99c63d15d1dc8cc38bb5f4e4d1f8c71. 
Sep 12 17:41:55.520959 kubelet[2239]: E0912 17:41:55.520908 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:41:55.521379 systemd[1]: Started cri-containerd-b31b1f61333521d608781eba0c32a65ae25dbf7f0aa73c72e6e17d129c7e4a36.scope - libcontainer container b31b1f61333521d608781eba0c32a65ae25dbf7f0aa73c72e6e17d129c7e4a36. Sep 12 17:41:55.531322 systemd[1]: Started cri-containerd-4b2cfbddd2ca690d4c67cbc3a1269f50aee71ab9fef2982e5eca06979c9916b4.scope - libcontainer container 4b2cfbddd2ca690d4c67cbc3a1269f50aee71ab9fef2982e5eca06979c9916b4. Sep 12 17:41:55.628209 containerd[1466]: time="2025-09-12T17:41:55.627569945Z" level=info msg="StartContainer for \"b31b1f61333521d608781eba0c32a65ae25dbf7f0aa73c72e6e17d129c7e4a36\" returns successfully" Sep 12 17:41:55.636146 containerd[1466]: time="2025-09-12T17:41:55.635076764Z" level=info msg="StartContainer for \"0d8f8c10ab0714a07cdf922fd32be0e5a99c63d15d1dc8cc38bb5f4e4d1f8c71\" returns successfully" Sep 12 17:41:55.651461 containerd[1466]: time="2025-09-12T17:41:55.651326162Z" level=info msg="StartContainer for \"4b2cfbddd2ca690d4c67cbc3a1269f50aee71ab9fef2982e5eca06979c9916b4\" returns successfully" Sep 12 17:41:55.680655 kubelet[2239]: I0912 17:41:55.680601 2239 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:55.683081 kubelet[2239]: E0912 17:41:55.681067 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.94:6443/api/v1/nodes\": dial tcp 10.128.0.94:6443: connect: connection refused" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:41:56.125924 kubelet[2239]: E0912 
17:41:56.125879 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:56.126836 kubelet[2239]: E0912 17:41:56.126801 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:56.127408 kubelet[2239]: E0912 17:41:56.127383 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:57.133316 kubelet[2239]: E0912 17:41:57.132483 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:57.133316 kubelet[2239]: E0912 17:41:57.132919 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:57.289978 kubelet[2239]: I0912 17:41:57.289933 2239 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:58.160875 kubelet[2239]: E0912 17:41:58.160815 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:58.985458 kubelet[2239]: I0912 17:41:58.985409 2239 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:58.985642 kubelet[2239]: E0912 17:41:58.985485 2239 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\": node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" not found"
Sep 12 17:41:59.042613 kubelet[2239]: I0912 17:41:59.042569 2239 apiserver.go:52] "Watching apiserver"
Sep 12 17:41:59.061613 kubelet[2239]: E0912 17:41:59.061552 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Sep 12 17:41:59.070806 kubelet[2239]: I0912 17:41:59.070764 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:59.071110 kubelet[2239]: I0912 17:41:59.071072 2239 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:41:59.127267 kubelet[2239]: E0912 17:41:59.127215 2239 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:59.127421 kubelet[2239]: I0912 17:41:59.127295 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:59.134356 kubelet[2239]: E0912 17:41:59.134317 2239 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:59.134356 kubelet[2239]: I0912 17:41:59.134356 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:59.138623 kubelet[2239]: E0912 17:41:59.138585 2239 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:59.550698 kubelet[2239]: I0912 17:41:59.548474 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:41:59.551727 kubelet[2239]: E0912 17:41:59.551500 2239 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:00.006134 update_engine[1450]: I20250912 17:42:00.005142 1450 update_attempter.cc:509] Updating boot flags...
Sep 12 17:42:00.115285 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2533)
Sep 12 17:42:00.329535 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2529)
Sep 12 17:42:01.546037 systemd[1]: Reloading requested from client PID 2544 ('systemctl') (unit session-9.scope)...
Sep 12 17:42:01.546058 systemd[1]: Reloading...
Sep 12 17:42:01.698123 zram_generator::config[2584]: No configuration found.
Sep 12 17:42:01.898141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:42:02.038016 systemd[1]: Reloading finished in 491 ms.
Sep 12 17:42:02.095050 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:02.109160 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:42:02.109533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:02.109605 systemd[1]: kubelet.service: Consumed 1.229s CPU time, 134.7M memory peak, 0B memory swap peak.
Sep 12 17:42:02.117512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:42:02.382678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:42:02.398353 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:42:02.471788 kubelet[2632]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:42:02.471788 kubelet[2632]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:42:02.471788 kubelet[2632]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:42:02.472468 kubelet[2632]: I0912 17:42:02.471854 2632 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:42:02.479339 kubelet[2632]: I0912 17:42:02.479294 2632 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 17:42:02.479339 kubelet[2632]: I0912 17:42:02.479322 2632 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:42:02.479623 kubelet[2632]: I0912 17:42:02.479589 2632 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 17:42:02.481015 kubelet[2632]: I0912 17:42:02.480978 2632 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 12 17:42:02.484209 kubelet[2632]: I0912 17:42:02.484015 2632 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:42:02.489150 kubelet[2632]: E0912 17:42:02.489113 2632 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:42:02.489254 kubelet[2632]: I0912 17:42:02.489151 2632 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:42:02.495476 kubelet[2632]: I0912 17:42:02.494183 2632 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:42:02.495476 kubelet[2632]: I0912 17:42:02.494546 2632 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:42:02.495476 kubelet[2632]: I0912 17:42:02.494575 2632 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:42:02.495476 kubelet[2632]: I0912 17:42:02.494928 2632 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:42:02.498291 kubelet[2632]: I0912 17:42:02.494946 2632 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 17:42:02.498291 kubelet[2632]: I0912 17:42:02.495020 2632 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:42:02.498291 kubelet[2632]: I0912 17:42:02.495270 2632 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 17:42:02.498291 kubelet[2632]: I0912 17:42:02.495300 2632 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:42:02.498291 kubelet[2632]: I0912 17:42:02.495336 2632 kubelet.go:386] "Adding apiserver pod source"
Sep 12 17:42:02.498291 kubelet[2632]: I0912 17:42:02.495354 2632 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:42:02.501130 kubelet[2632]: I0912 17:42:02.501107 2632 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:42:02.503507 kubelet[2632]: I0912 17:42:02.502127 2632 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 17:42:02.540083 kubelet[2632]: I0912 17:42:02.540036 2632 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:42:02.540280 kubelet[2632]: I0912 17:42:02.540136 2632 server.go:1289] "Started kubelet"
Sep 12 17:42:02.543336 kubelet[2632]: I0912 17:42:02.543230 2632 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:42:02.544162 kubelet[2632]: I0912 17:42:02.544131 2632 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:42:02.544332 kubelet[2632]: I0912 17:42:02.544258 2632 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:42:02.547116 kubelet[2632]: I0912 17:42:02.545753 2632 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 17:42:02.549327 kubelet[2632]: I0912 17:42:02.549272 2632 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:42:02.557142 kubelet[2632]: I0912 17:42:02.556473 2632 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:42:02.562026 kubelet[2632]: I0912 17:42:02.562003 2632 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:42:02.564607 kubelet[2632]: I0912 17:42:02.563258 2632 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:42:02.564828 kubelet[2632]: I0912 17:42:02.564709 2632 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:42:02.569764 kubelet[2632]: I0912 17:42:02.569663 2632 factory.go:223] Registration of the systemd container factory successfully
Sep 12 17:42:02.573390 kubelet[2632]: I0912 17:42:02.573321 2632 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:42:02.573672 kubelet[2632]: E0912 17:42:02.570794 2632 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:42:02.580811 kubelet[2632]: I0912 17:42:02.580786 2632 factory.go:223] Registration of the containerd container factory successfully
Sep 12 17:42:02.599877 kubelet[2632]: I0912 17:42:02.599809 2632 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:42:02.609612 kubelet[2632]: I0912 17:42:02.609432 2632 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:42:02.609612 kubelet[2632]: I0912 17:42:02.609462 2632 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 17:42:02.609612 kubelet[2632]: I0912 17:42:02.609487 2632 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:42:02.609612 kubelet[2632]: I0912 17:42:02.609498 2632 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 17:42:02.610488 kubelet[2632]: E0912 17:42:02.609556 2632 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:42:02.670794 kubelet[2632]: I0912 17:42:02.669559 2632 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:42:02.670794 kubelet[2632]: I0912 17:42:02.669585 2632 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:42:02.670794 kubelet[2632]: I0912 17:42:02.669636 2632 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:42:02.670794 kubelet[2632]: I0912 17:42:02.669913 2632 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 17:42:02.670794 kubelet[2632]: I0912 17:42:02.669929 2632 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 17:42:02.670794 kubelet[2632]: I0912 17:42:02.669972 2632 policy_none.go:49] "None policy: Start"
Sep 12 17:42:02.670794 kubelet[2632]: I0912 17:42:02.669986 2632 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:42:02.670794 kubelet[2632]: I0912 17:42:02.670001 2632 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:42:02.670794 kubelet[2632]: I0912 17:42:02.670213 2632 state_mem.go:75] "Updated machine memory state"
Sep 12 17:42:02.680697 kubelet[2632]: E0912 17:42:02.680648 2632 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 17:42:02.680946 kubelet[2632]: I0912 17:42:02.680906 2632 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:42:02.681053 kubelet[2632]: I0912 17:42:02.680923 2632 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:42:02.681736 kubelet[2632]: I0912 17:42:02.681678 2632 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:42:02.686338 kubelet[2632]: E0912 17:42:02.685680 2632 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:42:02.713377 kubelet[2632]: I0912 17:42:02.713343 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.713990 kubelet[2632]: I0912 17:42:02.713936 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.718308 kubelet[2632]: I0912 17:42:02.717591 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.727126 kubelet[2632]: I0912 17:42:02.727079 2632 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Sep 12 17:42:02.728253 kubelet[2632]: I0912 17:42:02.728204 2632 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Sep 12 17:42:02.730478 kubelet[2632]: I0912 17:42:02.730443 2632 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Sep 12 17:42:02.767368 kubelet[2632]: I0912 17:42:02.767324 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.767845 kubelet[2632]: I0912 17:42:02.767619 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77144eb2856aff280e4fada180634348-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"77144eb2856aff280e4fada180634348\") " pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.767845 kubelet[2632]: I0912 17:42:02.767720 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1fa1e1423d024a5e2a95d9e2fc4cd2f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"b1fa1e1423d024a5e2a95d9e2fc4cd2f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.767845 kubelet[2632]: I0912 17:42:02.767788 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1fa1e1423d024a5e2a95d9e2fc4cd2f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"b1fa1e1423d024a5e2a95d9e2fc4cd2f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.770154 kubelet[2632]: I0912 17:42:02.769976 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.770154 kubelet[2632]: I0912 17:42:02.770056 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1fa1e1423d024a5e2a95d9e2fc4cd2f-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"b1fa1e1423d024a5e2a95d9e2fc4cd2f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.770154 kubelet[2632]: I0912 17:42:02.770153 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.770434 kubelet[2632]: I0912 17:42:02.770183 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.770434 kubelet[2632]: I0912 17:42:02.770239 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/898b18e2cf43cca6b203522a285eb54a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" (UID: \"898b18e2cf43cca6b203522a285eb54a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.799917 kubelet[2632]: I0912 17:42:02.798804 2632 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.813129 kubelet[2632]: I0912 17:42:02.812385 2632 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:02.813129 kubelet[2632]: I0912 17:42:02.812480 2632 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:03.507426 kubelet[2632]: I0912 17:42:03.507011 2632 apiserver.go:52] "Watching apiserver"
Sep 12 17:42:03.564785 kubelet[2632]: I0912 17:42:03.564709 2632 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:42:03.631586 kubelet[2632]: I0912 17:42:03.631530 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:03.634440 kubelet[2632]: I0912 17:42:03.634406 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:03.646113 kubelet[2632]: I0912 17:42:03.644014 2632 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Sep 12 17:42:03.646113 kubelet[2632]: E0912 17:42:03.644100 2632 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:03.651507 kubelet[2632]: I0912 17:42:03.651476 2632 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Sep 12 17:42:03.651635 kubelet[2632]: E0912 17:42:03.651534 2632 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:03.705500 kubelet[2632]: I0912 17:42:03.705408 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" podStartSLOduration=1.705386201 podStartE2EDuration="1.705386201s" podCreationTimestamp="2025-09-12 17:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:03.705372157 +0000 UTC m=+1.301015277" watchObservedRunningTime="2025-09-12 17:42:03.705386201 +0000 UTC m=+1.301029318"
Sep 12 17:42:03.705725 kubelet[2632]: I0912 17:42:03.705588 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" podStartSLOduration=1.705575064 podStartE2EDuration="1.705575064s" podCreationTimestamp="2025-09-12 17:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:03.68771881 +0000 UTC m=+1.283361930" watchObservedRunningTime="2025-09-12 17:42:03.705575064 +0000 UTC m=+1.301218181"
Sep 12 17:42:03.719820 kubelet[2632]: I0912 17:42:03.719630 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" podStartSLOduration=1.719612433 podStartE2EDuration="1.719612433s" podCreationTimestamp="2025-09-12 17:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:03.719230677 +0000 UTC m=+1.314873795" watchObservedRunningTime="2025-09-12 17:42:03.719612433 +0000 UTC m=+1.315255553"
Sep 12 17:42:06.692450 kubelet[2632]: I0912 17:42:06.692409 2632 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 17:42:06.693203 containerd[1466]: time="2025-09-12T17:42:06.692985483Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 17:42:06.693679 kubelet[2632]: I0912 17:42:06.693259 2632 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 17:42:07.584161 systemd[1]: Created slice kubepods-besteffort-podca44bbc5_dc00_41a0_9d7d_4bbd6eabb1b8.slice - libcontainer container kubepods-besteffort-podca44bbc5_dc00_41a0_9d7d_4bbd6eabb1b8.slice.
Sep 12 17:42:07.601842 kubelet[2632]: I0912 17:42:07.601308 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8-xtables-lock\") pod \"kube-proxy-dkb8k\" (UID: \"ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8\") " pod="kube-system/kube-proxy-dkb8k"
Sep 12 17:42:07.601842 kubelet[2632]: I0912 17:42:07.601360 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8-lib-modules\") pod \"kube-proxy-dkb8k\" (UID: \"ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8\") " pod="kube-system/kube-proxy-dkb8k"
Sep 12 17:42:07.601842 kubelet[2632]: I0912 17:42:07.601396 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgplb\" (UniqueName: \"kubernetes.io/projected/ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8-kube-api-access-sgplb\") pod \"kube-proxy-dkb8k\" (UID: \"ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8\") " pod="kube-system/kube-proxy-dkb8k"
Sep 12 17:42:07.601842 kubelet[2632]: I0912 17:42:07.601427 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8-kube-proxy\") pod \"kube-proxy-dkb8k\" (UID: \"ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8\") " pod="kube-system/kube-proxy-dkb8k"
Sep 12 17:42:07.757141 systemd[1]: Created slice kubepods-besteffort-pod7a8180fa_2f5c_41ff_9197_0dee027599c9.slice - libcontainer container kubepods-besteffort-pod7a8180fa_2f5c_41ff_9197_0dee027599c9.slice.
Sep 12 17:42:07.802235 kubelet[2632]: I0912 17:42:07.802170 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a8180fa-2f5c-41ff-9197-0dee027599c9-var-lib-calico\") pod \"tigera-operator-755d956888-ztnp9\" (UID: \"7a8180fa-2f5c-41ff-9197-0dee027599c9\") " pod="tigera-operator/tigera-operator-755d956888-ztnp9"
Sep 12 17:42:07.802235 kubelet[2632]: I0912 17:42:07.802236 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv8fx\" (UniqueName: \"kubernetes.io/projected/7a8180fa-2f5c-41ff-9197-0dee027599c9-kube-api-access-jv8fx\") pod \"tigera-operator-755d956888-ztnp9\" (UID: \"7a8180fa-2f5c-41ff-9197-0dee027599c9\") " pod="tigera-operator/tigera-operator-755d956888-ztnp9"
Sep 12 17:42:07.896371 containerd[1466]: time="2025-09-12T17:42:07.896283963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkb8k,Uid:ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8,Namespace:kube-system,Attempt:0,}"
Sep 12 17:42:07.934874 containerd[1466]: time="2025-09-12T17:42:07.934766589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:42:07.935637 containerd[1466]: time="2025-09-12T17:42:07.935078431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:42:07.935637 containerd[1466]: time="2025-09-12T17:42:07.935230089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:42:07.937246 containerd[1466]: time="2025-09-12T17:42:07.936461759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:42:07.971284 systemd[1]: Started cri-containerd-815f4885497192e659c722ef80c6062d264b06b402c689ae50f53f633bf1161a.scope - libcontainer container 815f4885497192e659c722ef80c6062d264b06b402c689ae50f53f633bf1161a.
Sep 12 17:42:08.002861 containerd[1466]: time="2025-09-12T17:42:08.002802250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkb8k,Uid:ca44bbc5-dc00-41a0-9d7d-4bbd6eabb1b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"815f4885497192e659c722ef80c6062d264b06b402c689ae50f53f633bf1161a\""
Sep 12 17:42:08.009456 containerd[1466]: time="2025-09-12T17:42:08.009392926Z" level=info msg="CreateContainer within sandbox \"815f4885497192e659c722ef80c6062d264b06b402c689ae50f53f633bf1161a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:42:08.024596 containerd[1466]: time="2025-09-12T17:42:08.024541229Z" level=info msg="CreateContainer within sandbox \"815f4885497192e659c722ef80c6062d264b06b402c689ae50f53f633bf1161a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2396982281b80d391979998ce5709ffac54b35205f7d5b8900cfa96715ea1a62\""
Sep 12 17:42:08.025311 containerd[1466]: time="2025-09-12T17:42:08.025272673Z" level=info msg="StartContainer for \"2396982281b80d391979998ce5709ffac54b35205f7d5b8900cfa96715ea1a62\""
Sep 12 17:42:08.061684 systemd[1]: Started cri-containerd-2396982281b80d391979998ce5709ffac54b35205f7d5b8900cfa96715ea1a62.scope - libcontainer container 2396982281b80d391979998ce5709ffac54b35205f7d5b8900cfa96715ea1a62.
Sep 12 17:42:08.062844 containerd[1466]: time="2025-09-12T17:42:08.062223120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-ztnp9,Uid:7a8180fa-2f5c-41ff-9197-0dee027599c9,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:42:08.105986 containerd[1466]: time="2025-09-12T17:42:08.105577475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:42:08.105986 containerd[1466]: time="2025-09-12T17:42:08.105685610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:42:08.105986 containerd[1466]: time="2025-09-12T17:42:08.105710719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:42:08.105986 containerd[1466]: time="2025-09-12T17:42:08.105861289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:42:08.126735 containerd[1466]: time="2025-09-12T17:42:08.125805760Z" level=info msg="StartContainer for \"2396982281b80d391979998ce5709ffac54b35205f7d5b8900cfa96715ea1a62\" returns successfully"
Sep 12 17:42:08.151673 systemd[1]: Started cri-containerd-4071c4d43d6c09b57168c8eddc8d09ad84e18c1424924bdff89f424c54be1820.scope - libcontainer container 4071c4d43d6c09b57168c8eddc8d09ad84e18c1424924bdff89f424c54be1820.
Sep 12 17:42:08.232553 containerd[1466]: time="2025-09-12T17:42:08.232435556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-ztnp9,Uid:7a8180fa-2f5c-41ff-9197-0dee027599c9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4071c4d43d6c09b57168c8eddc8d09ad84e18c1424924bdff89f424c54be1820\""
Sep 12 17:42:08.238683 containerd[1466]: time="2025-09-12T17:42:08.238389461Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:42:08.669541 kubelet[2632]: I0912 17:42:08.669471 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dkb8k" podStartSLOduration=1.6694520769999999 podStartE2EDuration="1.669452077s" podCreationTimestamp="2025-09-12 17:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:08.657948217 +0000 UTC m=+6.253591334" watchObservedRunningTime="2025-09-12 17:42:08.669452077 +0000 UTC m=+6.265095193"
Sep 12 17:42:09.437986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522471525.mount: Deactivated successfully.
Sep 12 17:42:10.400537 containerd[1466]: time="2025-09-12T17:42:10.400471723Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:10.402057 containerd[1466]: time="2025-09-12T17:42:10.401867170Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 17:42:10.404543 containerd[1466]: time="2025-09-12T17:42:10.403058102Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:10.406344 containerd[1466]: time="2025-09-12T17:42:10.406306798Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:10.407509 containerd[1466]: time="2025-09-12T17:42:10.407421461Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.168926533s"
Sep 12 17:42:10.407509 containerd[1466]: time="2025-09-12T17:42:10.407472009Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 17:42:10.413227 containerd[1466]: time="2025-09-12T17:42:10.413175603Z" level=info msg="CreateContainer within sandbox \"4071c4d43d6c09b57168c8eddc8d09ad84e18c1424924bdff89f424c54be1820\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:42:10.430493 containerd[1466]: time="2025-09-12T17:42:10.430447882Z" level=info msg="CreateContainer within sandbox \"4071c4d43d6c09b57168c8eddc8d09ad84e18c1424924bdff89f424c54be1820\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"aa4766875cf415d4dad5c9cbb679024ae9ee014b65a6a5a67969779450beb17e\""
Sep 12 17:42:10.431404 containerd[1466]: time="2025-09-12T17:42:10.431225876Z" level=info msg="StartContainer for \"aa4766875cf415d4dad5c9cbb679024ae9ee014b65a6a5a67969779450beb17e\""
Sep 12 17:42:10.480908 systemd[1]: run-containerd-runc-k8s.io-aa4766875cf415d4dad5c9cbb679024ae9ee014b65a6a5a67969779450beb17e-runc.UpY5zh.mount: Deactivated successfully.
Sep 12 17:42:10.494301 systemd[1]: Started cri-containerd-aa4766875cf415d4dad5c9cbb679024ae9ee014b65a6a5a67969779450beb17e.scope - libcontainer container aa4766875cf415d4dad5c9cbb679024ae9ee014b65a6a5a67969779450beb17e.
Sep 12 17:42:10.530854 containerd[1466]: time="2025-09-12T17:42:10.530703805Z" level=info msg="StartContainer for \"aa4766875cf415d4dad5c9cbb679024ae9ee014b65a6a5a67969779450beb17e\" returns successfully"
Sep 12 17:42:17.481260 sudo[1742]: pam_unix(sudo:session): session closed for user root
Sep 12 17:42:17.539901 sshd[1739]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:17.553321 systemd[1]: sshd@8-10.128.0.94:22-139.178.89.65:37894.service: Deactivated successfully.
Sep 12 17:42:17.561672 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:42:17.563397 systemd[1]: session-9.scope: Consumed 6.498s CPU time, 161.1M memory peak, 0B memory swap peak.
Sep 12 17:42:17.565710 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:42:17.570309 systemd-logind[1445]: Removed session 9.
Sep 12 17:42:22.769723 kubelet[2632]: I0912 17:42:22.768565 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-ztnp9" podStartSLOduration=13.594808675 podStartE2EDuration="15.768516177s" podCreationTimestamp="2025-09-12 17:42:07 +0000 UTC" firstStartedPulling="2025-09-12 17:42:08.235301831 +0000 UTC m=+5.830944932" lastFinishedPulling="2025-09-12 17:42:10.409009338 +0000 UTC m=+8.004652434" observedRunningTime="2025-09-12 17:42:10.657597137 +0000 UTC m=+8.253240255" watchObservedRunningTime="2025-09-12 17:42:22.768516177 +0000 UTC m=+20.364159294" Sep 12 17:42:22.798977 systemd[1]: Created slice kubepods-besteffort-pod88653071_9e84_49d9_ab9c_2701750eee68.slice - libcontainer container kubepods-besteffort-pod88653071_9e84_49d9_ab9c_2701750eee68.slice. Sep 12 17:42:22.807125 kubelet[2632]: I0912 17:42:22.805753 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/88653071-9e84-49d9-ab9c-2701750eee68-typha-certs\") pod \"calico-typha-f9cb86746-qkfrk\" (UID: \"88653071-9e84-49d9-ab9c-2701750eee68\") " pod="calico-system/calico-typha-f9cb86746-qkfrk" Sep 12 17:42:22.807125 kubelet[2632]: I0912 17:42:22.805817 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88653071-9e84-49d9-ab9c-2701750eee68-tigera-ca-bundle\") pod \"calico-typha-f9cb86746-qkfrk\" (UID: \"88653071-9e84-49d9-ab9c-2701750eee68\") " pod="calico-system/calico-typha-f9cb86746-qkfrk" Sep 12 17:42:22.807125 kubelet[2632]: I0912 17:42:22.805854 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhdl\" (UniqueName: \"kubernetes.io/projected/88653071-9e84-49d9-ab9c-2701750eee68-kube-api-access-kfhdl\") pod \"calico-typha-f9cb86746-qkfrk\" (UID: 
\"88653071-9e84-49d9-ab9c-2701750eee68\") " pod="calico-system/calico-typha-f9cb86746-qkfrk" Sep 12 17:42:22.995833 systemd[1]: Created slice kubepods-besteffort-pod1e763e42_0c1a_472e_8fbb_2a2ed745936c.slice - libcontainer container kubepods-besteffort-pod1e763e42_0c1a_472e_8fbb_2a2ed745936c.slice. Sep 12 17:42:23.007233 kubelet[2632]: I0912 17:42:23.006742 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e763e42-0c1a-472e-8fbb-2a2ed745936c-var-run-calico\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007233 kubelet[2632]: I0912 17:42:23.006789 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e763e42-0c1a-472e-8fbb-2a2ed745936c-tigera-ca-bundle\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007233 kubelet[2632]: I0912 17:42:23.006819 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e763e42-0c1a-472e-8fbb-2a2ed745936c-var-lib-calico\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007233 kubelet[2632]: I0912 17:42:23.006846 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e763e42-0c1a-472e-8fbb-2a2ed745936c-node-certs\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007233 kubelet[2632]: I0912 17:42:23.006878 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e763e42-0c1a-472e-8fbb-2a2ed745936c-cni-log-dir\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007574 kubelet[2632]: I0912 17:42:23.006904 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e763e42-0c1a-472e-8fbb-2a2ed745936c-flexvol-driver-host\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007574 kubelet[2632]: I0912 17:42:23.006934 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e763e42-0c1a-472e-8fbb-2a2ed745936c-cni-net-dir\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007574 kubelet[2632]: I0912 17:42:23.006959 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e763e42-0c1a-472e-8fbb-2a2ed745936c-lib-modules\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007574 kubelet[2632]: I0912 17:42:23.006988 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e763e42-0c1a-472e-8fbb-2a2ed745936c-xtables-lock\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007574 kubelet[2632]: I0912 17:42:23.007015 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/1e763e42-0c1a-472e-8fbb-2a2ed745936c-policysync\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007846 kubelet[2632]: I0912 17:42:23.007046 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e763e42-0c1a-472e-8fbb-2a2ed745936c-cni-bin-dir\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.007846 kubelet[2632]: I0912 17:42:23.007072 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tffqt\" (UniqueName: \"kubernetes.io/projected/1e763e42-0c1a-472e-8fbb-2a2ed745936c-kube-api-access-tffqt\") pod \"calico-node-wxbgw\" (UID: \"1e763e42-0c1a-472e-8fbb-2a2ed745936c\") " pod="calico-system/calico-node-wxbgw" Sep 12 17:42:23.111570 kubelet[2632]: E0912 17:42:23.110809 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.111570 kubelet[2632]: W0912 17:42:23.110839 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.111570 kubelet[2632]: E0912 17:42:23.110877 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.115636 kubelet[2632]: E0912 17:42:23.115234 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.115636 kubelet[2632]: W0912 17:42:23.115262 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.115636 kubelet[2632]: E0912 17:42:23.115288 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.117652 containerd[1466]: time="2025-09-12T17:42:23.117270828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9cb86746-qkfrk,Uid:88653071-9e84-49d9-ab9c-2701750eee68,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:23.121927 kubelet[2632]: E0912 17:42:23.120249 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.121927 kubelet[2632]: W0912 17:42:23.120275 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.121927 kubelet[2632]: E0912 17:42:23.120296 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.125522 kubelet[2632]: E0912 17:42:23.122532 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.125522 kubelet[2632]: W0912 17:42:23.122556 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.126234 kubelet[2632]: E0912 17:42:23.122641 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.126356 kubelet[2632]: E0912 17:42:23.126253 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.126356 kubelet[2632]: W0912 17:42:23.126269 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.126356 kubelet[2632]: E0912 17:42:23.126288 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.131340 kubelet[2632]: E0912 17:42:23.128142 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.131340 kubelet[2632]: W0912 17:42:23.128163 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.131340 kubelet[2632]: E0912 17:42:23.128229 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.134441 kubelet[2632]: E0912 17:42:23.134399 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.134441 kubelet[2632]: W0912 17:42:23.134423 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.134643 kubelet[2632]: E0912 17:42:23.134444 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.135272 kubelet[2632]: E0912 17:42:23.135245 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.135272 kubelet[2632]: W0912 17:42:23.135271 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.135463 kubelet[2632]: E0912 17:42:23.135290 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.137743 kubelet[2632]: E0912 17:42:23.137715 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.137743 kubelet[2632]: W0912 17:42:23.137740 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.137912 kubelet[2632]: E0912 17:42:23.137757 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.145586 kubelet[2632]: E0912 17:42:23.145560 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.145586 kubelet[2632]: W0912 17:42:23.145584 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.145765 kubelet[2632]: E0912 17:42:23.145603 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.208265 containerd[1466]: time="2025-09-12T17:42:23.207751709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:23.208265 containerd[1466]: time="2025-09-12T17:42:23.207874423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:23.208265 containerd[1466]: time="2025-09-12T17:42:23.207954333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:23.208539 containerd[1466]: time="2025-09-12T17:42:23.208352214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:23.232127 kubelet[2632]: E0912 17:42:23.231822 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcz47" podUID="6f422465-d46f-4508-ba40-fe8c850f3aa6" Sep 12 17:42:23.258079 systemd[1]: Started cri-containerd-51763c0b8eff1a46d9af59db0eaa84c9cd6cc2d97dffef1f8232ac71022e9c1d.scope - libcontainer container 51763c0b8eff1a46d9af59db0eaa84c9cd6cc2d97dffef1f8232ac71022e9c1d. Sep 12 17:42:23.298449 kubelet[2632]: E0912 17:42:23.298041 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.298449 kubelet[2632]: W0912 17:42:23.298070 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.298449 kubelet[2632]: E0912 17:42:23.298129 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.299569 kubelet[2632]: E0912 17:42:23.298825 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.299569 kubelet[2632]: W0912 17:42:23.298872 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.299569 kubelet[2632]: E0912 17:42:23.298893 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.300937 kubelet[2632]: E0912 17:42:23.300303 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.300937 kubelet[2632]: W0912 17:42:23.300659 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.300937 kubelet[2632]: E0912 17:42:23.300683 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.302235 kubelet[2632]: E0912 17:42:23.302171 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.302547 kubelet[2632]: W0912 17:42:23.302294 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.302547 kubelet[2632]: E0912 17:42:23.302317 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.304824 kubelet[2632]: E0912 17:42:23.303640 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.304824 kubelet[2632]: W0912 17:42:23.303659 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.304824 kubelet[2632]: E0912 17:42:23.303705 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.307081 kubelet[2632]: E0912 17:42:23.306678 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.307081 kubelet[2632]: W0912 17:42:23.306705 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.307081 kubelet[2632]: E0912 17:42:23.306742 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.309698 kubelet[2632]: E0912 17:42:23.309428 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.309698 kubelet[2632]: W0912 17:42:23.309470 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.309698 kubelet[2632]: E0912 17:42:23.309491 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.310536 kubelet[2632]: E0912 17:42:23.310236 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.310536 kubelet[2632]: W0912 17:42:23.310255 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.310536 kubelet[2632]: E0912 17:42:23.310273 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.313582 kubelet[2632]: E0912 17:42:23.313274 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.313582 kubelet[2632]: W0912 17:42:23.313295 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.314045 kubelet[2632]: E0912 17:42:23.313839 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.315234 containerd[1466]: time="2025-09-12T17:42:23.314766445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wxbgw,Uid:1e763e42-0c1a-472e-8fbb-2a2ed745936c,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:23.315590 kubelet[2632]: E0912 17:42:23.315417 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.315590 kubelet[2632]: W0912 17:42:23.315434 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.315590 kubelet[2632]: E0912 17:42:23.315451 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.316452 kubelet[2632]: E0912 17:42:23.316385 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.316452 kubelet[2632]: W0912 17:42:23.316405 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.316452 kubelet[2632]: E0912 17:42:23.316421 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.318842 kubelet[2632]: E0912 17:42:23.318155 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.318842 kubelet[2632]: W0912 17:42:23.318174 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.318842 kubelet[2632]: E0912 17:42:23.318191 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.319922 kubelet[2632]: E0912 17:42:23.319393 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.319922 kubelet[2632]: W0912 17:42:23.319412 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.319922 kubelet[2632]: E0912 17:42:23.319430 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.319922 kubelet[2632]: E0912 17:42:23.319750 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.319922 kubelet[2632]: W0912 17:42:23.319764 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.319922 kubelet[2632]: E0912 17:42:23.319780 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.321449 kubelet[2632]: E0912 17:42:23.321400 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.321637 kubelet[2632]: W0912 17:42:23.321529 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.321637 kubelet[2632]: E0912 17:42:23.321553 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.322220 kubelet[2632]: E0912 17:42:23.322199 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.322439 kubelet[2632]: W0912 17:42:23.322349 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.322439 kubelet[2632]: E0912 17:42:23.322375 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.323134 kubelet[2632]: E0912 17:42:23.322926 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.323134 kubelet[2632]: W0912 17:42:23.322944 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.323134 kubelet[2632]: E0912 17:42:23.322961 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.323645 kubelet[2632]: E0912 17:42:23.323517 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.323937 kubelet[2632]: W0912 17:42:23.323733 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.323937 kubelet[2632]: E0912 17:42:23.323757 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.324613 kubelet[2632]: E0912 17:42:23.324486 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.324613 kubelet[2632]: W0912 17:42:23.324530 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.324613 kubelet[2632]: E0912 17:42:23.324549 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.326439 kubelet[2632]: E0912 17:42:23.326061 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.326439 kubelet[2632]: W0912 17:42:23.326082 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.326439 kubelet[2632]: E0912 17:42:23.326117 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.326775 kubelet[2632]: E0912 17:42:23.326583 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.326775 kubelet[2632]: W0912 17:42:23.326599 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.326775 kubelet[2632]: E0912 17:42:23.326615 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.326775 kubelet[2632]: I0912 17:42:23.326650 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f422465-d46f-4508-ba40-fe8c850f3aa6-registration-dir\") pod \"csi-node-driver-jcz47\" (UID: \"6f422465-d46f-4508-ba40-fe8c850f3aa6\") " pod="calico-system/csi-node-driver-jcz47" Sep 12 17:42:23.328043 kubelet[2632]: E0912 17:42:23.327040 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.328043 kubelet[2632]: W0912 17:42:23.327057 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.328043 kubelet[2632]: E0912 17:42:23.327076 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.328043 kubelet[2632]: I0912 17:42:23.327122 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq7t2\" (UniqueName: \"kubernetes.io/projected/6f422465-d46f-4508-ba40-fe8c850f3aa6-kube-api-access-cq7t2\") pod \"csi-node-driver-jcz47\" (UID: \"6f422465-d46f-4508-ba40-fe8c850f3aa6\") " pod="calico-system/csi-node-driver-jcz47" Sep 12 17:42:23.328043 kubelet[2632]: E0912 17:42:23.327427 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.328043 kubelet[2632]: W0912 17:42:23.327453 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.328043 kubelet[2632]: E0912 17:42:23.327470 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.328043 kubelet[2632]: I0912 17:42:23.327496 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f422465-d46f-4508-ba40-fe8c850f3aa6-socket-dir\") pod \"csi-node-driver-jcz47\" (UID: \"6f422465-d46f-4508-ba40-fe8c850f3aa6\") " pod="calico-system/csi-node-driver-jcz47" Sep 12 17:42:23.328043 kubelet[2632]: E0912 17:42:23.327928 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.328906 kubelet[2632]: W0912 17:42:23.327944 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.328906 kubelet[2632]: E0912 17:42:23.327960 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.328906 kubelet[2632]: I0912 17:42:23.328860 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6f422465-d46f-4508-ba40-fe8c850f3aa6-varrun\") pod \"csi-node-driver-jcz47\" (UID: \"6f422465-d46f-4508-ba40-fe8c850f3aa6\") " pod="calico-system/csi-node-driver-jcz47" Sep 12 17:42:23.329665 kubelet[2632]: E0912 17:42:23.329640 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.329665 kubelet[2632]: W0912 17:42:23.329665 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.330026 kubelet[2632]: E0912 17:42:23.329686 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.332222 kubelet[2632]: E0912 17:42:23.332197 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.332222 kubelet[2632]: W0912 17:42:23.332220 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.332389 kubelet[2632]: E0912 17:42:23.332238 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.332828 kubelet[2632]: E0912 17:42:23.332788 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.332828 kubelet[2632]: W0912 17:42:23.332811 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.332828 kubelet[2632]: E0912 17:42:23.332828 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.333426 kubelet[2632]: E0912 17:42:23.333180 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.333426 kubelet[2632]: W0912 17:42:23.333196 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.333426 kubelet[2632]: E0912 17:42:23.333212 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.333919 kubelet[2632]: E0912 17:42:23.333755 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.333919 kubelet[2632]: W0912 17:42:23.333771 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.333919 kubelet[2632]: E0912 17:42:23.333787 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.334421 kubelet[2632]: I0912 17:42:23.334348 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f422465-d46f-4508-ba40-fe8c850f3aa6-kubelet-dir\") pod \"csi-node-driver-jcz47\" (UID: \"6f422465-d46f-4508-ba40-fe8c850f3aa6\") " pod="calico-system/csi-node-driver-jcz47" Sep 12 17:42:23.336217 kubelet[2632]: E0912 17:42:23.335527 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.336738 kubelet[2632]: W0912 17:42:23.336342 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.336738 kubelet[2632]: E0912 17:42:23.336372 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.337598 kubelet[2632]: E0912 17:42:23.337576 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.337763 kubelet[2632]: W0912 17:42:23.337729 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.338123 kubelet[2632]: E0912 17:42:23.337988 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.338773 kubelet[2632]: E0912 17:42:23.338649 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.338773 kubelet[2632]: W0912 17:42:23.338664 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.338773 kubelet[2632]: E0912 17:42:23.338680 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.340397 kubelet[2632]: E0912 17:42:23.340264 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.340397 kubelet[2632]: W0912 17:42:23.340283 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.340397 kubelet[2632]: E0912 17:42:23.340301 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.341383 kubelet[2632]: E0912 17:42:23.341120 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.341383 kubelet[2632]: W0912 17:42:23.341177 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.341383 kubelet[2632]: E0912 17:42:23.341196 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.342123 kubelet[2632]: E0912 17:42:23.341862 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.342123 kubelet[2632]: W0912 17:42:23.341882 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.342123 kubelet[2632]: E0912 17:42:23.341900 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.411434 containerd[1466]: time="2025-09-12T17:42:23.411307383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:23.411648 containerd[1466]: time="2025-09-12T17:42:23.411404614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:23.411648 containerd[1466]: time="2025-09-12T17:42:23.411440366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:23.411648 containerd[1466]: time="2025-09-12T17:42:23.411562895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:23.444347 kubelet[2632]: E0912 17:42:23.443603 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.444347 kubelet[2632]: W0912 17:42:23.444286 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.445117 kubelet[2632]: E0912 17:42:23.444640 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.445632 kubelet[2632]: E0912 17:42:23.445553 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.445632 kubelet[2632]: W0912 17:42:23.445590 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.445632 kubelet[2632]: E0912 17:42:23.445612 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.448749 kubelet[2632]: E0912 17:42:23.447742 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.448749 kubelet[2632]: W0912 17:42:23.447766 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.448749 kubelet[2632]: E0912 17:42:23.447797 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.448749 kubelet[2632]: E0912 17:42:23.448220 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.448749 kubelet[2632]: W0912 17:42:23.448254 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.448749 kubelet[2632]: E0912 17:42:23.448273 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.448749 kubelet[2632]: E0912 17:42:23.448696 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.448749 kubelet[2632]: W0912 17:42:23.448720 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.449758 kubelet[2632]: E0912 17:42:23.449251 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.450311 kubelet[2632]: E0912 17:42:23.449939 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.450311 kubelet[2632]: W0912 17:42:23.449958 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.450311 kubelet[2632]: E0912 17:42:23.449996 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.450821 kubelet[2632]: E0912 17:42:23.450770 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.451189 kubelet[2632]: W0912 17:42:23.450979 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.451189 kubelet[2632]: E0912 17:42:23.451061 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.451798 kubelet[2632]: E0912 17:42:23.451778 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.452359 kubelet[2632]: W0912 17:42:23.452236 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.452359 kubelet[2632]: E0912 17:42:23.452267 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.454293 kubelet[2632]: E0912 17:42:23.453399 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.454293 kubelet[2632]: W0912 17:42:23.453436 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.454293 kubelet[2632]: E0912 17:42:23.453454 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.455415 kubelet[2632]: E0912 17:42:23.455394 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.455702 kubelet[2632]: W0912 17:42:23.455513 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.455702 kubelet[2632]: E0912 17:42:23.455538 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.456305 kubelet[2632]: E0912 17:42:23.456149 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.456305 kubelet[2632]: W0912 17:42:23.456169 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.456305 kubelet[2632]: E0912 17:42:23.456186 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.458150 kubelet[2632]: E0912 17:42:23.457546 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.458150 kubelet[2632]: W0912 17:42:23.457567 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.458150 kubelet[2632]: E0912 17:42:23.457584 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.458897 kubelet[2632]: E0912 17:42:23.458578 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.458897 kubelet[2632]: W0912 17:42:23.458599 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.458897 kubelet[2632]: E0912 17:42:23.458642 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.461673 kubelet[2632]: E0912 17:42:23.461344 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.461673 kubelet[2632]: W0912 17:42:23.461364 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.461673 kubelet[2632]: E0912 17:42:23.461382 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.463056 kubelet[2632]: E0912 17:42:23.462150 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.463056 kubelet[2632]: W0912 17:42:23.462168 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.463056 kubelet[2632]: E0912 17:42:23.462186 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.463868 kubelet[2632]: E0912 17:42:23.463531 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.463868 kubelet[2632]: W0912 17:42:23.463550 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.463868 kubelet[2632]: E0912 17:42:23.463568 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.464838 kubelet[2632]: E0912 17:42:23.464541 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.464838 kubelet[2632]: W0912 17:42:23.464562 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.464838 kubelet[2632]: E0912 17:42:23.464580 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.468408 kubelet[2632]: E0912 17:42:23.465392 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.468408 kubelet[2632]: W0912 17:42:23.465410 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.468408 kubelet[2632]: E0912 17:42:23.465427 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.468709 kubelet[2632]: E0912 17:42:23.468690 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.474899 kubelet[2632]: W0912 17:42:23.471356 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.474899 kubelet[2632]: E0912 17:42:23.471388 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.474899 kubelet[2632]: E0912 17:42:23.472549 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.474899 kubelet[2632]: W0912 17:42:23.472565 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.474899 kubelet[2632]: E0912 17:42:23.472582 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.475458 kubelet[2632]: E0912 17:42:23.475437 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.475978 kubelet[2632]: W0912 17:42:23.475800 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.475978 kubelet[2632]: E0912 17:42:23.475827 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.476411 kubelet[2632]: E0912 17:42:23.476391 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.476530 kubelet[2632]: W0912 17:42:23.476511 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.476626 kubelet[2632]: E0912 17:42:23.476610 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.477310 kubelet[2632]: E0912 17:42:23.477290 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.477459 kubelet[2632]: W0912 17:42:23.477437 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.477603 kubelet[2632]: E0912 17:42:23.477585 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.478238 kubelet[2632]: E0912 17:42:23.478186 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.478503 systemd[1]: Started cri-containerd-a78c30c816076db20af9185721ca81359ff9c94ec3423f6ffc84ea45e2ce06af.scope - libcontainer container a78c30c816076db20af9185721ca81359ff9c94ec3423f6ffc84ea45e2ce06af. Sep 12 17:42:23.480528 kubelet[2632]: W0912 17:42:23.480318 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.480528 kubelet[2632]: E0912 17:42:23.480349 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.481507 kubelet[2632]: E0912 17:42:23.481382 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.481507 kubelet[2632]: W0912 17:42:23.481431 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.481507 kubelet[2632]: E0912 17:42:23.481452 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:23.487511 kubelet[2632]: E0912 17:42:23.487172 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:23.487511 kubelet[2632]: W0912 17:42:23.487191 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:23.487511 kubelet[2632]: E0912 17:42:23.487209 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:23.568431 containerd[1466]: time="2025-09-12T17:42:23.568051165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9cb86746-qkfrk,Uid:88653071-9e84-49d9-ab9c-2701750eee68,Namespace:calico-system,Attempt:0,} returns sandbox id \"51763c0b8eff1a46d9af59db0eaa84c9cd6cc2d97dffef1f8232ac71022e9c1d\"" Sep 12 17:42:23.572751 containerd[1466]: time="2025-09-12T17:42:23.572454611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:42:23.607449 containerd[1466]: time="2025-09-12T17:42:23.607268569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wxbgw,Uid:1e763e42-0c1a-472e-8fbb-2a2ed745936c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a78c30c816076db20af9185721ca81359ff9c94ec3423f6ffc84ea45e2ce06af\"" Sep 12 17:42:24.618952 kubelet[2632]: E0912 17:42:24.618250 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcz47" podUID="6f422465-d46f-4508-ba40-fe8c850f3aa6" Sep 12 17:42:24.725434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559942338.mount: Deactivated successfully. 
Sep 12 17:42:26.070204 containerd[1466]: time="2025-09-12T17:42:26.070143056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:26.071580 containerd[1466]: time="2025-09-12T17:42:26.071355588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:42:26.073413 containerd[1466]: time="2025-09-12T17:42:26.072522852Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:26.076414 containerd[1466]: time="2025-09-12T17:42:26.076348530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:26.077793 containerd[1466]: time="2025-09-12T17:42:26.077207454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.504705144s" Sep 12 17:42:26.077793 containerd[1466]: time="2025-09-12T17:42:26.077254023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:42:26.079080 containerd[1466]: time="2025-09-12T17:42:26.078851214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:42:26.105199 containerd[1466]: time="2025-09-12T17:42:26.104870232Z" level=info msg="CreateContainer within sandbox \"51763c0b8eff1a46d9af59db0eaa84c9cd6cc2d97dffef1f8232ac71022e9c1d\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:42:26.122734 containerd[1466]: time="2025-09-12T17:42:26.122684777Z" level=info msg="CreateContainer within sandbox \"51763c0b8eff1a46d9af59db0eaa84c9cd6cc2d97dffef1f8232ac71022e9c1d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"deaa54ac75e2239000a2ae9e5821b8b85f11730c87f8fcb368e4ab31448eadf3\"" Sep 12 17:42:26.123488 containerd[1466]: time="2025-09-12T17:42:26.123292899Z" level=info msg="StartContainer for \"deaa54ac75e2239000a2ae9e5821b8b85f11730c87f8fcb368e4ab31448eadf3\"" Sep 12 17:42:26.182315 systemd[1]: Started cri-containerd-deaa54ac75e2239000a2ae9e5821b8b85f11730c87f8fcb368e4ab31448eadf3.scope - libcontainer container deaa54ac75e2239000a2ae9e5821b8b85f11730c87f8fcb368e4ab31448eadf3. Sep 12 17:42:26.247972 containerd[1466]: time="2025-09-12T17:42:26.247906689Z" level=info msg="StartContainer for \"deaa54ac75e2239000a2ae9e5821b8b85f11730c87f8fcb368e4ab31448eadf3\" returns successfully" Sep 12 17:42:26.610719 kubelet[2632]: E0912 17:42:26.610664 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcz47" podUID="6f422465-d46f-4508-ba40-fe8c850f3aa6" Sep 12 17:42:26.708703 kubelet[2632]: I0912 17:42:26.708453 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f9cb86746-qkfrk" podStartSLOduration=2.201519818 podStartE2EDuration="4.708430682s" podCreationTimestamp="2025-09-12 17:42:22 +0000 UTC" firstStartedPulling="2025-09-12 17:42:23.57175675 +0000 UTC m=+21.167399860" lastFinishedPulling="2025-09-12 17:42:26.078667617 +0000 UTC m=+23.674310724" observedRunningTime="2025-09-12 17:42:26.707767531 +0000 UTC m=+24.303410653" watchObservedRunningTime="2025-09-12 17:42:26.708430682 +0000 UTC 
m=+24.304073799" Sep 12 17:42:26.752825 kubelet[2632]: E0912 17:42:26.752788 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.754356 kubelet[2632]: W0912 17:42:26.754135 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.754356 kubelet[2632]: E0912 17:42:26.754185 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.754744 kubelet[2632]: E0912 17:42:26.754649 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.754744 kubelet[2632]: W0912 17:42:26.754667 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.754744 kubelet[2632]: E0912 17:42:26.754683 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.755757 kubelet[2632]: E0912 17:42:26.755632 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.755757 kubelet[2632]: W0912 17:42:26.755651 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.755757 kubelet[2632]: E0912 17:42:26.755670 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.756666 kubelet[2632]: E0912 17:42:26.756514 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.756666 kubelet[2632]: W0912 17:42:26.756530 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.756666 kubelet[2632]: E0912 17:42:26.756543 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.757073 kubelet[2632]: E0912 17:42:26.756847 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.757073 kubelet[2632]: W0912 17:42:26.756858 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.757073 kubelet[2632]: E0912 17:42:26.756870 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.757336 kubelet[2632]: E0912 17:42:26.757323 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.757448 kubelet[2632]: W0912 17:42:26.757387 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.757448 kubelet[2632]: E0912 17:42:26.757403 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.758383 kubelet[2632]: E0912 17:42:26.758216 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.758383 kubelet[2632]: W0912 17:42:26.758235 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.758383 kubelet[2632]: E0912 17:42:26.758252 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.758838 kubelet[2632]: E0912 17:42:26.758825 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.758971 kubelet[2632]: W0912 17:42:26.758895 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.758971 kubelet[2632]: E0912 17:42:26.758911 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.759403 kubelet[2632]: E0912 17:42:26.759380 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.759403 kubelet[2632]: W0912 17:42:26.759399 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.759561 kubelet[2632]: E0912 17:42:26.759416 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.759818 kubelet[2632]: E0912 17:42:26.759798 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.759818 kubelet[2632]: W0912 17:42:26.759817 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.759952 kubelet[2632]: E0912 17:42:26.759838 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.760224 kubelet[2632]: E0912 17:42:26.760204 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.760224 kubelet[2632]: W0912 17:42:26.760221 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.760363 kubelet[2632]: E0912 17:42:26.760237 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.760643 kubelet[2632]: E0912 17:42:26.760624 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.760643 kubelet[2632]: W0912 17:42:26.760642 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.760763 kubelet[2632]: E0912 17:42:26.760660 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.761022 kubelet[2632]: E0912 17:42:26.761002 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.761022 kubelet[2632]: W0912 17:42:26.761019 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.761243 kubelet[2632]: E0912 17:42:26.761044 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.761437 kubelet[2632]: E0912 17:42:26.761417 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.761437 kubelet[2632]: W0912 17:42:26.761435 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.761558 kubelet[2632]: E0912 17:42:26.761452 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.761820 kubelet[2632]: E0912 17:42:26.761802 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.761820 kubelet[2632]: W0912 17:42:26.761819 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.761948 kubelet[2632]: E0912 17:42:26.761834 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.787256 kubelet[2632]: E0912 17:42:26.787225 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.787256 kubelet[2632]: W0912 17:42:26.787252 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.787445 kubelet[2632]: E0912 17:42:26.787276 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.787730 kubelet[2632]: E0912 17:42:26.787704 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.787730 kubelet[2632]: W0912 17:42:26.787725 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.787920 kubelet[2632]: E0912 17:42:26.787759 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.788241 kubelet[2632]: E0912 17:42:26.788219 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.788241 kubelet[2632]: W0912 17:42:26.788238 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.788423 kubelet[2632]: E0912 17:42:26.788256 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.788739 kubelet[2632]: E0912 17:42:26.788715 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.788827 kubelet[2632]: W0912 17:42:26.788740 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.788827 kubelet[2632]: E0912 17:42:26.788759 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.789361 kubelet[2632]: E0912 17:42:26.789196 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.789361 kubelet[2632]: W0912 17:42:26.789215 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.789361 kubelet[2632]: E0912 17:42:26.789233 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.789658 kubelet[2632]: E0912 17:42:26.789637 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.789658 kubelet[2632]: W0912 17:42:26.789656 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.789795 kubelet[2632]: E0912 17:42:26.789673 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.790113 kubelet[2632]: E0912 17:42:26.790064 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.790113 kubelet[2632]: W0912 17:42:26.790082 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.790257 kubelet[2632]: E0912 17:42:26.790114 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.790507 kubelet[2632]: E0912 17:42:26.790478 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.790507 kubelet[2632]: W0912 17:42:26.790497 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.790644 kubelet[2632]: E0912 17:42:26.790518 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.790934 kubelet[2632]: E0912 17:42:26.790911 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.790934 kubelet[2632]: W0912 17:42:26.790931 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.791046 kubelet[2632]: E0912 17:42:26.790948 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.791489 kubelet[2632]: E0912 17:42:26.791467 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.791489 kubelet[2632]: W0912 17:42:26.791487 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.791645 kubelet[2632]: E0912 17:42:26.791504 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.791907 kubelet[2632]: E0912 17:42:26.791887 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.791907 kubelet[2632]: W0912 17:42:26.791904 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.792053 kubelet[2632]: E0912 17:42:26.791921 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.792330 kubelet[2632]: E0912 17:42:26.792310 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.792330 kubelet[2632]: W0912 17:42:26.792328 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.792739 kubelet[2632]: E0912 17:42:26.792344 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.792804 kubelet[2632]: E0912 17:42:26.792739 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.792804 kubelet[2632]: W0912 17:42:26.792753 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.792804 kubelet[2632]: E0912 17:42:26.792770 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.793784 kubelet[2632]: E0912 17:42:26.793243 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.793784 kubelet[2632]: W0912 17:42:26.793257 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.793784 kubelet[2632]: E0912 17:42:26.793276 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.793784 kubelet[2632]: E0912 17:42:26.793719 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.793784 kubelet[2632]: W0912 17:42:26.793734 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.793784 kubelet[2632]: E0912 17:42:26.793750 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.794222 kubelet[2632]: E0912 17:42:26.794189 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.794222 kubelet[2632]: W0912 17:42:26.794204 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.794328 kubelet[2632]: E0912 17:42:26.794259 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:26.794850 kubelet[2632]: E0912 17:42:26.794830 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.794850 kubelet[2632]: W0912 17:42:26.794847 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.795001 kubelet[2632]: E0912 17:42:26.794865 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:26.795335 kubelet[2632]: E0912 17:42:26.795315 2632 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:26.795335 kubelet[2632]: W0912 17:42:26.795333 2632 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:26.795459 kubelet[2632]: E0912 17:42:26.795350 2632 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:27.006342 containerd[1466]: time="2025-09-12T17:42:27.006274034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:27.007512 containerd[1466]: time="2025-09-12T17:42:27.007443567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:42:27.009138 containerd[1466]: time="2025-09-12T17:42:27.008389879Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:27.011662 containerd[1466]: time="2025-09-12T17:42:27.011625043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:27.012553 containerd[1466]: time="2025-09-12T17:42:27.012498284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 933.603179ms" Sep 12 17:42:27.012666 containerd[1466]: time="2025-09-12T17:42:27.012559579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:42:27.017973 containerd[1466]: time="2025-09-12T17:42:27.017902408Z" level=info msg="CreateContainer within sandbox \"a78c30c816076db20af9185721ca81359ff9c94ec3423f6ffc84ea45e2ce06af\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:42:27.046469 containerd[1466]: time="2025-09-12T17:42:27.046356948Z" level=info msg="CreateContainer within sandbox \"a78c30c816076db20af9185721ca81359ff9c94ec3423f6ffc84ea45e2ce06af\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"10f5b71a402cbf137df73d927360d7b6db6d03eab103c63683fbb89def4dade6\"" Sep 12 17:42:27.049554 containerd[1466]: time="2025-09-12T17:42:27.048268577Z" level=info msg="StartContainer for \"10f5b71a402cbf137df73d927360d7b6db6d03eab103c63683fbb89def4dade6\"" Sep 12 17:42:27.087329 systemd[1]: Started cri-containerd-10f5b71a402cbf137df73d927360d7b6db6d03eab103c63683fbb89def4dade6.scope - libcontainer container 10f5b71a402cbf137df73d927360d7b6db6d03eab103c63683fbb89def4dade6. Sep 12 17:42:27.135907 containerd[1466]: time="2025-09-12T17:42:27.135821919Z" level=info msg="StartContainer for \"10f5b71a402cbf137df73d927360d7b6db6d03eab103c63683fbb89def4dade6\" returns successfully" Sep 12 17:42:27.163463 systemd[1]: cri-containerd-10f5b71a402cbf137df73d927360d7b6db6d03eab103c63683fbb89def4dade6.scope: Deactivated successfully. 
Sep 12 17:42:27.208683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10f5b71a402cbf137df73d927360d7b6db6d03eab103c63683fbb89def4dade6-rootfs.mount: Deactivated successfully. Sep 12 17:42:27.866313 containerd[1466]: time="2025-09-12T17:42:27.866102362Z" level=info msg="shim disconnected" id=10f5b71a402cbf137df73d927360d7b6db6d03eab103c63683fbb89def4dade6 namespace=k8s.io Sep 12 17:42:27.866313 containerd[1466]: time="2025-09-12T17:42:27.866218047Z" level=warning msg="cleaning up after shim disconnected" id=10f5b71a402cbf137df73d927360d7b6db6d03eab103c63683fbb89def4dade6 namespace=k8s.io Sep 12 17:42:27.866717 containerd[1466]: time="2025-09-12T17:42:27.866235832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:42:28.611214 kubelet[2632]: E0912 17:42:28.610739 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcz47" podUID="6f422465-d46f-4508-ba40-fe8c850f3aa6" Sep 12 17:42:28.698351 containerd[1466]: time="2025-09-12T17:42:28.698289522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:42:30.611118 kubelet[2632]: E0912 17:42:30.611050 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcz47" podUID="6f422465-d46f-4508-ba40-fe8c850f3aa6" Sep 12 17:42:31.851988 containerd[1466]: time="2025-09-12T17:42:31.851895992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:31.854030 containerd[1466]: time="2025-09-12T17:42:31.853576969Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 12 17:42:31.856227 containerd[1466]: time="2025-09-12T17:42:31.856144796Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:31.860121 containerd[1466]: time="2025-09-12T17:42:31.858989716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:42:31.860121 containerd[1466]: time="2025-09-12T17:42:31.859936516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.161592478s"
Sep 12 17:42:31.860121 containerd[1466]: time="2025-09-12T17:42:31.859986870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 12 17:42:31.866306 containerd[1466]: time="2025-09-12T17:42:31.866258688Z" level=info msg="CreateContainer within sandbox \"a78c30c816076db20af9185721ca81359ff9c94ec3423f6ffc84ea45e2ce06af\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:42:31.886160 containerd[1466]: time="2025-09-12T17:42:31.886116954Z" level=info msg="CreateContainer within sandbox \"a78c30c816076db20af9185721ca81359ff9c94ec3423f6ffc84ea45e2ce06af\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a613d37f81978311eab7cbc5948b75ea6c1fcf9a64660b343c30e45c4c9bf5bb\""
Sep 12 17:42:31.887035 containerd[1466]: time="2025-09-12T17:42:31.886966573Z" level=info msg="StartContainer for \"a613d37f81978311eab7cbc5948b75ea6c1fcf9a64660b343c30e45c4c9bf5bb\""
Sep 12 17:42:31.934576 systemd[1]: Started cri-containerd-a613d37f81978311eab7cbc5948b75ea6c1fcf9a64660b343c30e45c4c9bf5bb.scope - libcontainer container a613d37f81978311eab7cbc5948b75ea6c1fcf9a64660b343c30e45c4c9bf5bb.
Sep 12 17:42:31.974300 containerd[1466]: time="2025-09-12T17:42:31.974237645Z" level=info msg="StartContainer for \"a613d37f81978311eab7cbc5948b75ea6c1fcf9a64660b343c30e45c4c9bf5bb\" returns successfully"
Sep 12 17:42:32.611675 kubelet[2632]: E0912 17:42:32.610509 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcz47" podUID="6f422465-d46f-4508-ba40-fe8c850f3aa6"
Sep 12 17:42:33.023501 systemd[1]: cri-containerd-a613d37f81978311eab7cbc5948b75ea6c1fcf9a64660b343c30e45c4c9bf5bb.scope: Deactivated successfully.
Sep 12 17:42:33.061769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a613d37f81978311eab7cbc5948b75ea6c1fcf9a64660b343c30e45c4c9bf5bb-rootfs.mount: Deactivated successfully.
Sep 12 17:42:33.102416 kubelet[2632]: I0912 17:42:33.100579 2632 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 17:42:33.433561 kubelet[2632]: I0912 17:42:33.433485 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d23bfb6-ec70-4170-8790-d653e47690fd-config-volume\") pod \"coredns-674b8bbfcf-nf9rd\" (UID: \"1d23bfb6-ec70-4170-8790-d653e47690fd\") " pod="kube-system/coredns-674b8bbfcf-nf9rd"
Sep 12 17:42:33.433561 kubelet[2632]: I0912 17:42:33.433542 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhcm\" (UniqueName: \"kubernetes.io/projected/1d23bfb6-ec70-4170-8790-d653e47690fd-kube-api-access-hmhcm\") pod \"coredns-674b8bbfcf-nf9rd\" (UID: \"1d23bfb6-ec70-4170-8790-d653e47690fd\") " pod="kube-system/coredns-674b8bbfcf-nf9rd"
Sep 12 17:42:33.535884 systemd[1]: Created slice kubepods-burstable-pod1d23bfb6_ec70_4170_8790_d653e47690fd.slice - libcontainer container kubepods-burstable-pod1d23bfb6_ec70_4170_8790_d653e47690fd.slice.
Sep 12 17:42:33.618331 containerd[1466]: time="2025-09-12T17:42:33.617496491Z" level=info msg="shim disconnected" id=a613d37f81978311eab7cbc5948b75ea6c1fcf9a64660b343c30e45c4c9bf5bb namespace=k8s.io
Sep 12 17:42:33.618331 containerd[1466]: time="2025-09-12T17:42:33.617671760Z" level=warning msg="cleaning up after shim disconnected" id=a613d37f81978311eab7cbc5948b75ea6c1fcf9a64660b343c30e45c4c9bf5bb namespace=k8s.io
Sep 12 17:42:33.618331 containerd[1466]: time="2025-09-12T17:42:33.617706518Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:42:33.640669 kubelet[2632]: I0912 17:42:33.635178 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-whisker-backend-key-pair\") pod \"whisker-7896b6f685-hz8pp\" (UID: \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\") " pod="calico-system/whisker-7896b6f685-hz8pp"
Sep 12 17:42:33.640669 kubelet[2632]: I0912 17:42:33.635233 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knc4z\" (UniqueName: \"kubernetes.io/projected/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-kube-api-access-knc4z\") pod \"whisker-7896b6f685-hz8pp\" (UID: \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\") " pod="calico-system/whisker-7896b6f685-hz8pp"
Sep 12 17:42:33.640669 kubelet[2632]: I0912 17:42:33.635271 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlj5g\" (UniqueName: \"kubernetes.io/projected/1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d-kube-api-access-tlj5g\") pod \"calico-apiserver-fd4c4dbff-4fwwl\" (UID: \"1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d\") " pod="calico-apiserver/calico-apiserver-fd4c4dbff-4fwwl"
Sep 12 17:42:33.640669 kubelet[2632]: I0912 17:42:33.635307 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d-calico-apiserver-certs\") pod \"calico-apiserver-fd4c4dbff-4fwwl\" (UID: \"1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d\") " pod="calico-apiserver/calico-apiserver-fd4c4dbff-4fwwl"
Sep 12 17:42:33.640669 kubelet[2632]: I0912 17:42:33.635379 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-whisker-ca-bundle\") pod \"whisker-7896b6f685-hz8pp\" (UID: \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\") " pod="calico-system/whisker-7896b6f685-hz8pp"
Sep 12 17:42:33.637318 systemd[1]: Created slice kubepods-besteffort-pod1ba79efb_0e0a_4d76_a5e1_787d8e35bc0d.slice - libcontainer container kubepods-besteffort-pod1ba79efb_0e0a_4d76_a5e1_787d8e35bc0d.slice.
Sep 12 17:42:33.656165 containerd[1466]: time="2025-09-12T17:42:33.655402832Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:42:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:42:33.666671 systemd[1]: Created slice kubepods-besteffort-podbd283cb2_6fbc_43c3_935a_ca8ffdd0b2b8.slice - libcontainer container kubepods-besteffort-podbd283cb2_6fbc_43c3_935a_ca8ffdd0b2b8.slice.
Sep 12 17:42:33.684619 systemd[1]: Created slice kubepods-besteffort-pod183ac9eb_36cd_4016_8ce1_d2ef0ae1a87d.slice - libcontainer container kubepods-besteffort-pod183ac9eb_36cd_4016_8ce1_d2ef0ae1a87d.slice.
Sep 12 17:42:33.693935 systemd[1]: Created slice kubepods-burstable-pod8cf3bd71_c352_4c33_a791_2ad40156deb1.slice - libcontainer container kubepods-burstable-pod8cf3bd71_c352_4c33_a791_2ad40156deb1.slice.
Sep 12 17:42:33.710018 systemd[1]: Created slice kubepods-besteffort-pod3ca1b676_84c9_4bc8_ae1d_208eeead353b.slice - libcontainer container kubepods-besteffort-pod3ca1b676_84c9_4bc8_ae1d_208eeead353b.slice.
Sep 12 17:42:33.719646 systemd[1]: Created slice kubepods-besteffort-pod07aac094_4524_4cf9_bf29_6a01c3a02371.slice - libcontainer container kubepods-besteffort-pod07aac094_4524_4cf9_bf29_6a01c3a02371.slice.
Sep 12 17:42:33.722995 containerd[1466]: time="2025-09-12T17:42:33.722921757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:42:33.736431 kubelet[2632]: I0912 17:42:33.736369 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07aac094-4524-4cf9-bf29-6a01c3a02371-config\") pod \"goldmane-54d579b49d-ncq8h\" (UID: \"07aac094-4524-4cf9-bf29-6a01c3a02371\") " pod="calico-system/goldmane-54d579b49d-ncq8h"
Sep 12 17:42:33.736798 kubelet[2632]: I0912 17:42:33.736704 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brnjl\" (UniqueName: \"kubernetes.io/projected/07aac094-4524-4cf9-bf29-6a01c3a02371-kube-api-access-brnjl\") pod \"goldmane-54d579b49d-ncq8h\" (UID: \"07aac094-4524-4cf9-bf29-6a01c3a02371\") " pod="calico-system/goldmane-54d579b49d-ncq8h"
Sep 12 17:42:33.737952 kubelet[2632]: I0912 17:42:33.736917 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d-calico-apiserver-certs\") pod \"calico-apiserver-fd4c4dbff-cp27h\" (UID: \"183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d\") " pod="calico-apiserver/calico-apiserver-fd4c4dbff-cp27h"
Sep 12 17:42:33.737952 kubelet[2632]: I0912 17:42:33.737158 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k6qp\" (UniqueName: \"kubernetes.io/projected/183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d-kube-api-access-8k6qp\") pod \"calico-apiserver-fd4c4dbff-cp27h\" (UID: \"183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d\") " pod="calico-apiserver/calico-apiserver-fd4c4dbff-cp27h"
Sep 12 17:42:33.737952 kubelet[2632]: I0912 17:42:33.737219 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf3bd71-c352-4c33-a791-2ad40156deb1-config-volume\") pod \"coredns-674b8bbfcf-8l2m5\" (UID: \"8cf3bd71-c352-4c33-a791-2ad40156deb1\") " pod="kube-system/coredns-674b8bbfcf-8l2m5"
Sep 12 17:42:33.737952 kubelet[2632]: I0912 17:42:33.737250 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07aac094-4524-4cf9-bf29-6a01c3a02371-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-ncq8h\" (UID: \"07aac094-4524-4cf9-bf29-6a01c3a02371\") " pod="calico-system/goldmane-54d579b49d-ncq8h"
Sep 12 17:42:33.737952 kubelet[2632]: I0912 17:42:33.737278 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/07aac094-4524-4cf9-bf29-6a01c3a02371-goldmane-key-pair\") pod \"goldmane-54d579b49d-ncq8h\" (UID: \"07aac094-4524-4cf9-bf29-6a01c3a02371\") " pod="calico-system/goldmane-54d579b49d-ncq8h"
Sep 12 17:42:33.738329 kubelet[2632]: I0912 17:42:33.737344 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzr27\" (UniqueName: \"kubernetes.io/projected/8cf3bd71-c352-4c33-a791-2ad40156deb1-kube-api-access-wzr27\") pod \"coredns-674b8bbfcf-8l2m5\" (UID: \"8cf3bd71-c352-4c33-a791-2ad40156deb1\") " pod="kube-system/coredns-674b8bbfcf-8l2m5"
Sep 12 17:42:33.738329 kubelet[2632]: I0912 17:42:33.737377 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tbc7\" (UniqueName: \"kubernetes.io/projected/3ca1b676-84c9-4bc8-ae1d-208eeead353b-kube-api-access-5tbc7\") pod \"calico-kube-controllers-6fdfb6d66f-lczz9\" (UID: \"3ca1b676-84c9-4bc8-ae1d-208eeead353b\") " pod="calico-system/calico-kube-controllers-6fdfb6d66f-lczz9"
Sep 12 17:42:33.747024 kubelet[2632]: I0912 17:42:33.746992 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ca1b676-84c9-4bc8-ae1d-208eeead353b-tigera-ca-bundle\") pod \"calico-kube-controllers-6fdfb6d66f-lczz9\" (UID: \"3ca1b676-84c9-4bc8-ae1d-208eeead353b\") " pod="calico-system/calico-kube-controllers-6fdfb6d66f-lczz9"
Sep 12 17:42:33.840561 containerd[1466]: time="2025-09-12T17:42:33.840512093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nf9rd,Uid:1d23bfb6-ec70-4170-8790-d653e47690fd,Namespace:kube-system,Attempt:0,}"
Sep 12 17:42:33.956637 containerd[1466]: time="2025-09-12T17:42:33.956495253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd4c4dbff-4fwwl,Uid:1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:42:33.958302 containerd[1466]: time="2025-09-12T17:42:33.958252276Z" level=error msg="Failed to destroy network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:33.958775 containerd[1466]: time="2025-09-12T17:42:33.958723697Z" level=error msg="encountered an error cleaning up failed sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:33.958900 containerd[1466]: time="2025-09-12T17:42:33.958830336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nf9rd,Uid:1d23bfb6-ec70-4170-8790-d653e47690fd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:33.959918 kubelet[2632]: E0912 17:42:33.959202 2632 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:33.959918 kubelet[2632]: E0912 17:42:33.959314 2632 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nf9rd"
Sep 12 17:42:33.959918 kubelet[2632]: E0912 17:42:33.959350 2632 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nf9rd"
Sep 12 17:42:33.960222 kubelet[2632]: E0912 17:42:33.959429 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nf9rd_kube-system(1d23bfb6-ec70-4170-8790-d653e47690fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nf9rd_kube-system(1d23bfb6-ec70-4170-8790-d653e47690fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nf9rd" podUID="1d23bfb6-ec70-4170-8790-d653e47690fd"
Sep 12 17:42:33.987419 containerd[1466]: time="2025-09-12T17:42:33.987040187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7896b6f685-hz8pp,Uid:bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8,Namespace:calico-system,Attempt:0,}"
Sep 12 17:42:33.992621 containerd[1466]: time="2025-09-12T17:42:33.992467091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd4c4dbff-cp27h,Uid:183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:42:34.002395 containerd[1466]: time="2025-09-12T17:42:34.002354867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8l2m5,Uid:8cf3bd71-c352-4c33-a791-2ad40156deb1,Namespace:kube-system,Attempt:0,}"
Sep 12 17:42:34.024868 containerd[1466]: time="2025-09-12T17:42:34.024815915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdfb6d66f-lczz9,Uid:3ca1b676-84c9-4bc8-ae1d-208eeead353b,Namespace:calico-system,Attempt:0,}"
Sep 12 17:42:34.028842 containerd[1466]: time="2025-09-12T17:42:34.028796481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ncq8h,Uid:07aac094-4524-4cf9-bf29-6a01c3a02371,Namespace:calico-system,Attempt:0,}"
Sep 12 17:42:34.194700 containerd[1466]: time="2025-09-12T17:42:34.194534559Z" level=error msg="Failed to destroy network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.195351 containerd[1466]: time="2025-09-12T17:42:34.195210033Z" level=error msg="encountered an error cleaning up failed sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.195351 containerd[1466]: time="2025-09-12T17:42:34.195290791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd4c4dbff-4fwwl,Uid:1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.198384 kubelet[2632]: E0912 17:42:34.197805 2632 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.198384 kubelet[2632]: E0912 17:42:34.198234 2632 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fd4c4dbff-4fwwl"
Sep 12 17:42:34.198384 kubelet[2632]: E0912 17:42:34.198307 2632 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fd4c4dbff-4fwwl"
Sep 12 17:42:34.199848 kubelet[2632]: E0912 17:42:34.198746 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fd4c4dbff-4fwwl_calico-apiserver(1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fd4c4dbff-4fwwl_calico-apiserver(1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fd4c4dbff-4fwwl" podUID="1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d"
Sep 12 17:42:34.203000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5-shm.mount: Deactivated successfully.
Sep 12 17:42:34.375929 containerd[1466]: time="2025-09-12T17:42:34.375873066Z" level=error msg="Failed to destroy network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.376608 containerd[1466]: time="2025-09-12T17:42:34.376560390Z" level=error msg="encountered an error cleaning up failed sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.377389 containerd[1466]: time="2025-09-12T17:42:34.377331961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd4c4dbff-cp27h,Uid:183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.378083 kubelet[2632]: E0912 17:42:34.378033 2632 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.378427 kubelet[2632]: E0912 17:42:34.378244 2632 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fd4c4dbff-cp27h"
Sep 12 17:42:34.378427 kubelet[2632]: E0912 17:42:34.378305 2632 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fd4c4dbff-cp27h"
Sep 12 17:42:34.380483 kubelet[2632]: E0912 17:42:34.380368 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fd4c4dbff-cp27h_calico-apiserver(183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fd4c4dbff-cp27h_calico-apiserver(183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fd4c4dbff-cp27h" podUID="183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d"
Sep 12 17:42:34.390712 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf-shm.mount: Deactivated successfully.
Sep 12 17:42:34.398368 containerd[1466]: time="2025-09-12T17:42:34.398221687Z" level=error msg="Failed to destroy network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.398962 containerd[1466]: time="2025-09-12T17:42:34.398778477Z" level=error msg="encountered an error cleaning up failed sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.398962 containerd[1466]: time="2025-09-12T17:42:34.398849765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7896b6f685-hz8pp,Uid:bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.402228 kubelet[2632]: E0912 17:42:34.399300 2632 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.402228 kubelet[2632]: E0912 17:42:34.399370 2632 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7896b6f685-hz8pp"
Sep 12 17:42:34.402228 kubelet[2632]: E0912 17:42:34.399415 2632 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7896b6f685-hz8pp"
Sep 12 17:42:34.402462 kubelet[2632]: E0912 17:42:34.399517 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7896b6f685-hz8pp_calico-system(bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7896b6f685-hz8pp_calico-system(bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7896b6f685-hz8pp" podUID="bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8"
Sep 12 17:42:34.413237 containerd[1466]: time="2025-09-12T17:42:34.413192343Z" level=error msg="Failed to destroy network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.413896 containerd[1466]: time="2025-09-12T17:42:34.413806981Z" level=error msg="encountered an error cleaning up failed sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.414128 containerd[1466]: time="2025-09-12T17:42:34.414062433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdfb6d66f-lczz9,Uid:3ca1b676-84c9-4bc8-ae1d-208eeead353b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.414617 kubelet[2632]: E0912 17:42:34.414575 2632 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.414826 kubelet[2632]: E0912 17:42:34.414794 2632 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fdfb6d66f-lczz9"
Sep 12 17:42:34.414971 kubelet[2632]: E0912 17:42:34.414941 2632 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fdfb6d66f-lczz9"
Sep 12 17:42:34.415700 kubelet[2632]: E0912 17:42:34.415207 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fdfb6d66f-lczz9_calico-system(3ca1b676-84c9-4bc8-ae1d-208eeead353b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fdfb6d66f-lczz9_calico-system(3ca1b676-84c9-4bc8-ae1d-208eeead353b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fdfb6d66f-lczz9" podUID="3ca1b676-84c9-4bc8-ae1d-208eeead353b"
Sep 12 17:42:34.423788 containerd[1466]: time="2025-09-12T17:42:34.423711022Z" level=error msg="Failed to destroy network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.425359 containerd[1466]: time="2025-09-12T17:42:34.425291107Z" level=error msg="encountered an error cleaning up failed sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.425478 containerd[1466]: time="2025-09-12T17:42:34.425421400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ncq8h,Uid:07aac094-4524-4cf9-bf29-6a01c3a02371,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.425793 kubelet[2632]: E0912 17:42:34.425724 2632 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.425905 kubelet[2632]: E0912 17:42:34.425790 2632 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ncq8h"
Sep 12 17:42:34.425905 kubelet[2632]: E0912 17:42:34.425827 2632 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ncq8h"
Sep 12 17:42:34.426160 kubelet[2632]: E0912 17:42:34.425892 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-ncq8h_calico-system(07aac094-4524-4cf9-bf29-6a01c3a02371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-ncq8h_calico-system(07aac094-4524-4cf9-bf29-6a01c3a02371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ncq8h" podUID="07aac094-4524-4cf9-bf29-6a01c3a02371"
Sep 12 17:42:34.431378 containerd[1466]: time="2025-09-12T17:42:34.431268647Z" level=error msg="Failed to destroy network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.431852 containerd[1466]: time="2025-09-12T17:42:34.431791194Z" level=error msg="encountered an error cleaning up failed sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.432040 containerd[1466]: time="2025-09-12T17:42:34.431866213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8l2m5,Uid:8cf3bd71-c352-4c33-a791-2ad40156deb1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.432214 kubelet[2632]: E0912 17:42:34.432155 2632 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:42:34.432345 kubelet[2632]: E0912 17:42:34.432215 2632 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8l2m5"
Sep 12 17:42:34.432345 kubelet[2632]: E0912 17:42:34.432248 2632 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8l2m5"
Sep 12 17:42:34.432345 kubelet[2632]: E0912 17:42:34.432326 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-8l2m5_kube-system(8cf3bd71-c352-4c33-a791-2ad40156deb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8l2m5_kube-system(8cf3bd71-c352-4c33-a791-2ad40156deb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8l2m5" podUID="8cf3bd71-c352-4c33-a791-2ad40156deb1" Sep 12 17:42:34.619695 systemd[1]: Created slice kubepods-besteffort-pod6f422465_d46f_4508_ba40_fe8c850f3aa6.slice - libcontainer container kubepods-besteffort-pod6f422465_d46f_4508_ba40_fe8c850f3aa6.slice. Sep 12 17:42:34.622590 containerd[1466]: time="2025-09-12T17:42:34.622543328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jcz47,Uid:6f422465-d46f-4508-ba40-fe8c850f3aa6,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:34.731446 containerd[1466]: time="2025-09-12T17:42:34.731257011Z" level=error msg="Failed to destroy network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:34.732415 containerd[1466]: time="2025-09-12T17:42:34.731951660Z" level=error msg="encountered an error cleaning up failed sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:34.732415 containerd[1466]: 
time="2025-09-12T17:42:34.732034380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jcz47,Uid:6f422465-d46f-4508-ba40-fe8c850f3aa6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:34.736152 kubelet[2632]: E0912 17:42:34.734507 2632 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:34.736152 kubelet[2632]: E0912 17:42:34.734582 2632 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jcz47" Sep 12 17:42:34.736152 kubelet[2632]: E0912 17:42:34.734624 2632 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jcz47" Sep 12 17:42:34.736852 kubelet[2632]: E0912 17:42:34.734695 2632 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jcz47_calico-system(6f422465-d46f-4508-ba40-fe8c850f3aa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jcz47_calico-system(6f422465-d46f-4508-ba40-fe8c850f3aa6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jcz47" podUID="6f422465-d46f-4508-ba40-fe8c850f3aa6" Sep 12 17:42:34.743052 kubelet[2632]: I0912 17:42:34.741038 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:42:34.746564 containerd[1466]: time="2025-09-12T17:42:34.745782138Z" level=info msg="StopPodSandbox for \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\"" Sep 12 17:42:34.746564 containerd[1466]: time="2025-09-12T17:42:34.746031252Z" level=info msg="Ensure that sandbox 5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654 in task-service has been cleanup successfully" Sep 12 17:42:34.759575 kubelet[2632]: I0912 17:42:34.759544 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:42:34.763685 containerd[1466]: time="2025-09-12T17:42:34.762322052Z" level=info msg="StopPodSandbox for \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\"" Sep 12 17:42:34.763685 containerd[1466]: time="2025-09-12T17:42:34.762651598Z" level=info msg="Ensure that sandbox 2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057 in task-service has been cleanup successfully" Sep 12 
17:42:34.768791 kubelet[2632]: I0912 17:42:34.768761 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:42:34.771494 containerd[1466]: time="2025-09-12T17:42:34.771449155Z" level=info msg="StopPodSandbox for \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\"" Sep 12 17:42:34.771734 containerd[1466]: time="2025-09-12T17:42:34.771691418Z" level=info msg="Ensure that sandbox 0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1 in task-service has been cleanup successfully" Sep 12 17:42:34.781321 kubelet[2632]: I0912 17:42:34.781290 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:42:34.788506 containerd[1466]: time="2025-09-12T17:42:34.788471758Z" level=info msg="StopPodSandbox for \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\"" Sep 12 17:42:34.788846 containerd[1466]: time="2025-09-12T17:42:34.788814810Z" level=info msg="Ensure that sandbox 0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a in task-service has been cleanup successfully" Sep 12 17:42:34.809405 kubelet[2632]: I0912 17:42:34.809375 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:42:34.811570 containerd[1466]: time="2025-09-12T17:42:34.811500168Z" level=info msg="StopPodSandbox for \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\"" Sep 12 17:42:34.813000 containerd[1466]: time="2025-09-12T17:42:34.812899733Z" level=info msg="Ensure that sandbox f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf in task-service has been cleanup successfully" Sep 12 17:42:34.821637 kubelet[2632]: I0912 17:42:34.821346 2632 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:42:34.824938 containerd[1466]: time="2025-09-12T17:42:34.824725576Z" level=info msg="StopPodSandbox for \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\"" Sep 12 17:42:34.825685 containerd[1466]: time="2025-09-12T17:42:34.825557331Z" level=info msg="Ensure that sandbox 0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5 in task-service has been cleanup successfully" Sep 12 17:42:34.831436 kubelet[2632]: I0912 17:42:34.830814 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:42:34.833196 containerd[1466]: time="2025-09-12T17:42:34.832903715Z" level=info msg="StopPodSandbox for \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\"" Sep 12 17:42:34.841315 containerd[1466]: time="2025-09-12T17:42:34.841062536Z" level=info msg="Ensure that sandbox 6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995 in task-service has been cleanup successfully" Sep 12 17:42:34.993384 containerd[1466]: time="2025-09-12T17:42:34.993190269Z" level=error msg="StopPodSandbox for \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\" failed" error="failed to destroy network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:34.995598 kubelet[2632]: E0912 17:42:34.994478 2632 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:42:34.995598 kubelet[2632]: E0912 17:42:34.994561 2632 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654"} Sep 12 17:42:34.995598 kubelet[2632]: E0912 17:42:34.994691 2632 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07aac094-4524-4cf9-bf29-6a01c3a02371\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:34.995598 kubelet[2632]: E0912 17:42:34.994731 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07aac094-4524-4cf9-bf29-6a01c3a02371\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ncq8h" podUID="07aac094-4524-4cf9-bf29-6a01c3a02371" Sep 12 17:42:34.998066 containerd[1466]: time="2025-09-12T17:42:34.997983404Z" level=error msg="StopPodSandbox for \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\" failed" error="failed to destroy network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:34.998876 kubelet[2632]: E0912 17:42:34.998661 2632 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:42:34.998876 kubelet[2632]: E0912 17:42:34.998715 2632 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995"} Sep 12 17:42:34.998876 kubelet[2632]: E0912 17:42:34.998761 2632 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d23bfb6-ec70-4170-8790-d653e47690fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:34.998876 kubelet[2632]: E0912 17:42:34.998806 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d23bfb6-ec70-4170-8790-d653e47690fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nf9rd" 
podUID="1d23bfb6-ec70-4170-8790-d653e47690fd" Sep 12 17:42:35.012282 containerd[1466]: time="2025-09-12T17:42:35.012231194Z" level=error msg="StopPodSandbox for \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\" failed" error="failed to destroy network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:35.012678 kubelet[2632]: E0912 17:42:35.012634 2632 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:42:35.012862 kubelet[2632]: E0912 17:42:35.012836 2632 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057"} Sep 12 17:42:35.013047 kubelet[2632]: E0912 17:42:35.013020 2632 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8cf3bd71-c352-4c33-a791-2ad40156deb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:35.014119 kubelet[2632]: E0912 17:42:35.013250 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"8cf3bd71-c352-4c33-a791-2ad40156deb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8l2m5" podUID="8cf3bd71-c352-4c33-a791-2ad40156deb1" Sep 12 17:42:35.020689 containerd[1466]: time="2025-09-12T17:42:35.020604828Z" level=error msg="StopPodSandbox for \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\" failed" error="failed to destroy network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:35.020919 kubelet[2632]: E0912 17:42:35.020868 2632 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:42:35.021018 containerd[1466]: time="2025-09-12T17:42:35.020974446Z" level=error msg="StopPodSandbox for \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\" failed" error="failed to destroy network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 
12 17:42:35.022205 kubelet[2632]: E0912 17:42:35.022170 2632 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a"} Sep 12 17:42:35.022627 kubelet[2632]: E0912 17:42:35.021237 2632 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:42:35.022627 kubelet[2632]: E0912 17:42:35.022434 2632 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1"} Sep 12 17:42:35.022627 kubelet[2632]: E0912 17:42:35.022476 2632 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:35.022627 kubelet[2632]: E0912 17:42:35.022514 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7896b6f685-hz8pp" podUID="bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8" Sep 12 17:42:35.022985 kubelet[2632]: E0912 17:42:35.022382 2632 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ca1b676-84c9-4bc8-ae1d-208eeead353b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:35.022985 kubelet[2632]: E0912 17:42:35.022592 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ca1b676-84c9-4bc8-ae1d-208eeead353b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fdfb6d66f-lczz9" podUID="3ca1b676-84c9-4bc8-ae1d-208eeead353b" Sep 12 17:42:35.048800 containerd[1466]: time="2025-09-12T17:42:35.048734882Z" level=error msg="StopPodSandbox for \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\" failed" error="failed to destroy network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:35.049630 kubelet[2632]: E0912 17:42:35.049051 2632 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:42:35.049749 containerd[1466]: time="2025-09-12T17:42:35.049599842Z" level=error msg="StopPodSandbox for \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\" failed" error="failed to destroy network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:35.050031 kubelet[2632]: E0912 17:42:35.049996 2632 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5"} Sep 12 17:42:35.050031 kubelet[2632]: E0912 17:42:35.050050 2632 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:35.050306 kubelet[2632]: E0912 17:42:35.049942 2632 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:42:35.050306 kubelet[2632]: E0912 17:42:35.050245 2632 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf"} Sep 12 17:42:35.050306 kubelet[2632]: E0912 17:42:35.050298 2632 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:35.050518 kubelet[2632]: E0912 17:42:35.050330 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fd4c4dbff-cp27h" podUID="183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d" Sep 12 17:42:35.050623 kubelet[2632]: E0912 17:42:35.050101 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fd4c4dbff-4fwwl" podUID="1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d" Sep 12 17:42:35.066013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654-shm.mount: Deactivated successfully. Sep 12 17:42:35.067294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a-shm.mount: Deactivated successfully. Sep 12 17:42:35.067762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057-shm.mount: Deactivated successfully. Sep 12 17:42:35.068217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1-shm.mount: Deactivated successfully. 
Sep 12 17:42:35.834926 kubelet[2632]: I0912 17:42:35.834883 2632 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:42:35.836183 containerd[1466]: time="2025-09-12T17:42:35.836076399Z" level=info msg="StopPodSandbox for \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\"" Sep 12 17:42:35.836672 containerd[1466]: time="2025-09-12T17:42:35.836496820Z" level=info msg="Ensure that sandbox a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a in task-service has been cleanup successfully" Sep 12 17:42:35.881558 containerd[1466]: time="2025-09-12T17:42:35.881484690Z" level=error msg="StopPodSandbox for \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\" failed" error="failed to destroy network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:35.882186 kubelet[2632]: E0912 17:42:35.881753 2632 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:42:35.882186 kubelet[2632]: E0912 17:42:35.881883 2632 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a"} Sep 12 17:42:35.882186 kubelet[2632]: E0912 17:42:35.881933 2632 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"6f422465-d46f-4508-ba40-fe8c850f3aa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:35.882186 kubelet[2632]: E0912 17:42:35.881974 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f422465-d46f-4508-ba40-fe8c850f3aa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jcz47" podUID="6f422465-d46f-4508-ba40-fe8c850f3aa6" Sep 12 17:42:41.388950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount253216752.mount: Deactivated successfully. 
Sep 12 17:42:41.428476 containerd[1466]: time="2025-09-12T17:42:41.428414390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:41.430499 containerd[1466]: time="2025-09-12T17:42:41.430424634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:42:41.431895 containerd[1466]: time="2025-09-12T17:42:41.431855703Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:41.434831 containerd[1466]: time="2025-09-12T17:42:41.434757538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:41.436144 containerd[1466]: time="2025-09-12T17:42:41.435710340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.712702029s" Sep 12 17:42:41.436144 containerd[1466]: time="2025-09-12T17:42:41.435758155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:42:41.460929 containerd[1466]: time="2025-09-12T17:42:41.460878898Z" level=info msg="CreateContainer within sandbox \"a78c30c816076db20af9185721ca81359ff9c94ec3423f6ffc84ea45e2ce06af\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:42:41.479635 containerd[1466]: time="2025-09-12T17:42:41.479596263Z" level=info 
msg="CreateContainer within sandbox \"a78c30c816076db20af9185721ca81359ff9c94ec3423f6ffc84ea45e2ce06af\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"347f91e682bbcd1ab4362f22dc54b258a6f236f26dc3a2fb920901376e41763b\"" Sep 12 17:42:41.481801 containerd[1466]: time="2025-09-12T17:42:41.481548178Z" level=info msg="StartContainer for \"347f91e682bbcd1ab4362f22dc54b258a6f236f26dc3a2fb920901376e41763b\"" Sep 12 17:42:41.523293 systemd[1]: Started cri-containerd-347f91e682bbcd1ab4362f22dc54b258a6f236f26dc3a2fb920901376e41763b.scope - libcontainer container 347f91e682bbcd1ab4362f22dc54b258a6f236f26dc3a2fb920901376e41763b. Sep 12 17:42:41.566566 containerd[1466]: time="2025-09-12T17:42:41.566436775Z" level=info msg="StartContainer for \"347f91e682bbcd1ab4362f22dc54b258a6f236f26dc3a2fb920901376e41763b\" returns successfully" Sep 12 17:42:41.693507 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:42:41.693815 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 17:42:41.809155 containerd[1466]: time="2025-09-12T17:42:41.808636655Z" level=info msg="StopPodSandbox for \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\"" Sep 12 17:42:41.920120 kubelet[2632]: I0912 17:42:41.919712 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wxbgw" podStartSLOduration=2.09273993 podStartE2EDuration="19.919679583s" podCreationTimestamp="2025-09-12 17:42:22 +0000 UTC" firstStartedPulling="2025-09-12 17:42:23.610258743 +0000 UTC m=+21.205901839" lastFinishedPulling="2025-09-12 17:42:41.437198395 +0000 UTC m=+39.032841492" observedRunningTime="2025-09-12 17:42:41.903430294 +0000 UTC m=+39.499073411" watchObservedRunningTime="2025-09-12 17:42:41.919679583 +0000 UTC m=+39.515322703" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.918 [INFO][3847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.918 [INFO][3847] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" iface="eth0" netns="/var/run/netns/cni-43128948-6a5f-5225-97b1-4289aa06d67e" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.919 [INFO][3847] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" iface="eth0" netns="/var/run/netns/cni-43128948-6a5f-5225-97b1-4289aa06d67e" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.919 [INFO][3847] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" iface="eth0" netns="/var/run/netns/cni-43128948-6a5f-5225-97b1-4289aa06d67e" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.919 [INFO][3847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.920 [INFO][3847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.966 [INFO][3856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" HandleID="k8s-pod-network.0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.966 [INFO][3856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.966 [INFO][3856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.975 [WARNING][3856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" HandleID="k8s-pod-network.0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.975 [INFO][3856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" HandleID="k8s-pod-network.0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.977 [INFO][3856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:41.984059 containerd[1466]: 2025-09-12 17:42:41.981 [INFO][3847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:42:41.984848 containerd[1466]: time="2025-09-12T17:42:41.984247199Z" level=info msg="TearDown network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\" successfully" Sep 12 17:42:41.984848 containerd[1466]: time="2025-09-12T17:42:41.984301348Z" level=info msg="StopPodSandbox for \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\" returns successfully" Sep 12 17:42:42.113054 kubelet[2632]: I0912 17:42:42.112495 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knc4z\" (UniqueName: \"kubernetes.io/projected/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-kube-api-access-knc4z\") pod \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\" (UID: \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\") " Sep 12 17:42:42.113054 kubelet[2632]: I0912 17:42:42.112648 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-whisker-backend-key-pair\") pod \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\" (UID: \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\") " Sep 12 17:42:42.113054 kubelet[2632]: I0912 17:42:42.112739 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-whisker-ca-bundle\") pod \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\" (UID: \"bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8\") " Sep 12 17:42:42.116333 kubelet[2632]: I0912 17:42:42.113483 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8" (UID: "bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:42:42.119210 kubelet[2632]: I0912 17:42:42.119158 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8" (UID: "bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:42:42.120680 kubelet[2632]: I0912 17:42:42.120646 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-kube-api-access-knc4z" (OuterVolumeSpecName: "kube-api-access-knc4z") pod "bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8" (UID: "bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8"). InnerVolumeSpecName "kube-api-access-knc4z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:42:42.214124 kubelet[2632]: I0912 17:42:42.214054 2632 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-whisker-ca-bundle\") on node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" DevicePath \"\"" Sep 12 17:42:42.214124 kubelet[2632]: I0912 17:42:42.214121 2632 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-knc4z\" (UniqueName: \"kubernetes.io/projected/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-kube-api-access-knc4z\") on node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" DevicePath \"\"" Sep 12 17:42:42.214317 kubelet[2632]: I0912 17:42:42.214140 2632 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8-whisker-backend-key-pair\") on node \"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal\" DevicePath \"\"" Sep 12 17:42:42.387361 systemd[1]: run-netns-cni\x2d43128948\x2d6a5f\x2d5225\x2d97b1\x2d4289aa06d67e.mount: Deactivated successfully. Sep 12 17:42:42.387523 systemd[1]: var-lib-kubelet-pods-bd283cb2\x2d6fbc\x2d43c3\x2d935a\x2dca8ffdd0b2b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dknc4z.mount: Deactivated successfully. Sep 12 17:42:42.387634 systemd[1]: var-lib-kubelet-pods-bd283cb2\x2d6fbc\x2d43c3\x2d935a\x2dca8ffdd0b2b8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:42:42.619064 systemd[1]: Removed slice kubepods-besteffort-podbd283cb2_6fbc_43c3_935a_ca8ffdd0b2b8.slice - libcontainer container kubepods-besteffort-podbd283cb2_6fbc_43c3_935a_ca8ffdd0b2b8.slice. 
Sep 12 17:42:42.869611 kubelet[2632]: I0912 17:42:42.869570 2632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:42:42.952420 systemd[1]: Created slice kubepods-besteffort-pod51c99f10_03e5_4847_8175_25700c90dca0.slice - libcontainer container kubepods-besteffort-pod51c99f10_03e5_4847_8175_25700c90dca0.slice. Sep 12 17:42:43.020132 kubelet[2632]: I0912 17:42:43.020066 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/51c99f10-03e5-4847-8175-25700c90dca0-whisker-backend-key-pair\") pod \"whisker-6d8c5bf89b-6wwsz\" (UID: \"51c99f10-03e5-4847-8175-25700c90dca0\") " pod="calico-system/whisker-6d8c5bf89b-6wwsz" Sep 12 17:42:43.020639 kubelet[2632]: I0912 17:42:43.020163 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29jcf\" (UniqueName: \"kubernetes.io/projected/51c99f10-03e5-4847-8175-25700c90dca0-kube-api-access-29jcf\") pod \"whisker-6d8c5bf89b-6wwsz\" (UID: \"51c99f10-03e5-4847-8175-25700c90dca0\") " pod="calico-system/whisker-6d8c5bf89b-6wwsz" Sep 12 17:42:43.020639 kubelet[2632]: I0912 17:42:43.020214 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51c99f10-03e5-4847-8175-25700c90dca0-whisker-ca-bundle\") pod \"whisker-6d8c5bf89b-6wwsz\" (UID: \"51c99f10-03e5-4847-8175-25700c90dca0\") " pod="calico-system/whisker-6d8c5bf89b-6wwsz" Sep 12 17:42:43.261032 containerd[1466]: time="2025-09-12T17:42:43.260807639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d8c5bf89b-6wwsz,Uid:51c99f10-03e5-4847-8175-25700c90dca0,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:43.522069 systemd-networkd[1383]: calic2abce5daa0: Link UP Sep 12 17:42:43.528626 systemd-networkd[1383]: calic2abce5daa0: Gained carrier Sep 12 
17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.343 [INFO][3951] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.377 [INFO][3951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0 whisker-6d8c5bf89b- calico-system 51c99f10-03e5-4847-8175-25700c90dca0 930 0 2025-09-12 17:42:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d8c5bf89b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal whisker-6d8c5bf89b-6wwsz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic2abce5daa0 [] [] }} ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Namespace="calico-system" Pod="whisker-6d8c5bf89b-6wwsz" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.377 [INFO][3951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Namespace="calico-system" Pod="whisker-6d8c5bf89b-6wwsz" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.443 [INFO][3977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" HandleID="k8s-pod-network.e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" Sep 12 
17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.443 [INFO][3977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" HandleID="k8s-pod-network.e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", "pod":"whisker-6d8c5bf89b-6wwsz", "timestamp":"2025-09-12 17:42:43.443358526 +0000 UTC"}, Hostname:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.443 [INFO][3977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.443 [INFO][3977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.444 [INFO][3977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal' Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.456 [INFO][3977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.462 [INFO][3977] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.468 [INFO][3977] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.470 [INFO][3977] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.474 [INFO][3977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.474 [INFO][3977] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.476 [INFO][3977] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249 Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.482 [INFO][3977] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.86.192/26 handle="k8s-pod-network.e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.494 [INFO][3977] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.193/26] block=192.168.86.192/26 handle="k8s-pod-network.e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.494 [INFO][3977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.193/26] handle="k8s-pod-network.e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.494 [INFO][3977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:43.569977 containerd[1466]: 2025-09-12 17:42:43.494 [INFO][3977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.193/26] IPv6=[] ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" HandleID="k8s-pod-network.e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" Sep 12 17:42:43.572712 containerd[1466]: 2025-09-12 17:42:43.498 [INFO][3951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Namespace="calico-system" Pod="whisker-6d8c5bf89b-6wwsz" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0", GenerateName:"whisker-6d8c5bf89b-", Namespace:"calico-system", SelfLink:"", UID:"51c99f10-03e5-4847-8175-25700c90dca0", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d8c5bf89b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-6d8c5bf89b-6wwsz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.86.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic2abce5daa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:43.572712 containerd[1466]: 2025-09-12 17:42:43.499 [INFO][3951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.193/32] ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Namespace="calico-system" Pod="whisker-6d8c5bf89b-6wwsz" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" Sep 12 17:42:43.572712 containerd[1466]: 2025-09-12 17:42:43.499 [INFO][3951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2abce5daa0 
ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Namespace="calico-system" Pod="whisker-6d8c5bf89b-6wwsz" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" Sep 12 17:42:43.572712 containerd[1466]: 2025-09-12 17:42:43.523 [INFO][3951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Namespace="calico-system" Pod="whisker-6d8c5bf89b-6wwsz" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" Sep 12 17:42:43.572712 containerd[1466]: 2025-09-12 17:42:43.527 [INFO][3951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Namespace="calico-system" Pod="whisker-6d8c5bf89b-6wwsz" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0", GenerateName:"whisker-6d8c5bf89b-", Namespace:"calico-system", SelfLink:"", UID:"51c99f10-03e5-4847-8175-25700c90dca0", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d8c5bf89b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249", Pod:"whisker-6d8c5bf89b-6wwsz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.86.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic2abce5daa0", MAC:"36:35:55:19:b3:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:43.572712 containerd[1466]: 2025-09-12 17:42:43.565 [INFO][3951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249" Namespace="calico-system" Pod="whisker-6d8c5bf89b-6wwsz" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--6d8c5bf89b--6wwsz-eth0" Sep 12 17:42:43.627228 containerd[1466]: time="2025-09-12T17:42:43.627117722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:43.627417 containerd[1466]: time="2025-09-12T17:42:43.627191544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:43.629179 containerd[1466]: time="2025-09-12T17:42:43.627399431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:43.633303 containerd[1466]: time="2025-09-12T17:42:43.633239108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:43.691158 systemd[1]: run-containerd-runc-k8s.io-e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249-runc.USJ2b2.mount: Deactivated successfully. Sep 12 17:42:43.703678 systemd[1]: Started cri-containerd-e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249.scope - libcontainer container e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249. Sep 12 17:42:43.814600 containerd[1466]: time="2025-09-12T17:42:43.813262228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d8c5bf89b-6wwsz,Uid:51c99f10-03e5-4847-8175-25700c90dca0,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249\"" Sep 12 17:42:43.820934 containerd[1466]: time="2025-09-12T17:42:43.820893739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:42:44.116335 kernel: bpftool[4068]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:42:44.433631 systemd-networkd[1383]: vxlan.calico: Link UP Sep 12 17:42:44.433645 systemd-networkd[1383]: vxlan.calico: Gained carrier Sep 12 17:42:44.631643 kubelet[2632]: I0912 17:42:44.631593 2632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8" path="/var/lib/kubelet/pods/bd283cb2-6fbc-43c3-935a-ca8ffdd0b2b8/volumes" Sep 12 17:42:44.970894 containerd[1466]: time="2025-09-12T17:42:44.970826463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:44.972376 containerd[1466]: time="2025-09-12T17:42:44.972219987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:42:44.974133 containerd[1466]: time="2025-09-12T17:42:44.973480467Z" level=info msg="ImageCreate event 
name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:44.976779 containerd[1466]: time="2025-09-12T17:42:44.976741705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:44.977948 containerd[1466]: time="2025-09-12T17:42:44.977904503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.156950865s" Sep 12 17:42:44.978048 containerd[1466]: time="2025-09-12T17:42:44.977953069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:42:44.985073 containerd[1466]: time="2025-09-12T17:42:44.985029123Z" level=info msg="CreateContainer within sandbox \"e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:42:45.007979 containerd[1466]: time="2025-09-12T17:42:45.007924445Z" level=info msg="CreateContainer within sandbox \"e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"334c9c9a2365cd8c6bb0bae334e2e4ad265e8ed421d76a01634d140228b103d4\"" Sep 12 17:42:45.011741 containerd[1466]: time="2025-09-12T17:42:45.008688333Z" level=info msg="StartContainer for \"334c9c9a2365cd8c6bb0bae334e2e4ad265e8ed421d76a01634d140228b103d4\"" Sep 12 17:42:45.064152 systemd[1]: 
run-containerd-runc-k8s.io-334c9c9a2365cd8c6bb0bae334e2e4ad265e8ed421d76a01634d140228b103d4-runc.WddZR5.mount: Deactivated successfully. Sep 12 17:42:45.075334 systemd[1]: Started cri-containerd-334c9c9a2365cd8c6bb0bae334e2e4ad265e8ed421d76a01634d140228b103d4.scope - libcontainer container 334c9c9a2365cd8c6bb0bae334e2e4ad265e8ed421d76a01634d140228b103d4. Sep 12 17:42:45.136581 containerd[1466]: time="2025-09-12T17:42:45.136526100Z" level=info msg="StartContainer for \"334c9c9a2365cd8c6bb0bae334e2e4ad265e8ed421d76a01634d140228b103d4\" returns successfully" Sep 12 17:42:45.140266 containerd[1466]: time="2025-09-12T17:42:45.140151527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:42:45.576484 systemd-networkd[1383]: calic2abce5daa0: Gained IPv6LL Sep 12 17:42:46.344934 systemd-networkd[1383]: vxlan.calico: Gained IPv6LL Sep 12 17:42:46.897164 kubelet[2632]: I0912 17:42:46.897040 2632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:42:47.423962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount609702302.mount: Deactivated successfully. 
Sep 12 17:42:47.446027 containerd[1466]: time="2025-09-12T17:42:47.445955941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:47.447469 containerd[1466]: time="2025-09-12T17:42:47.447395633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:42:47.448869 containerd[1466]: time="2025-09-12T17:42:47.448454782Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:47.451721 containerd[1466]: time="2025-09-12T17:42:47.451681134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:47.452715 containerd[1466]: time="2025-09-12T17:42:47.452668591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.312470848s" Sep 12 17:42:47.452815 containerd[1466]: time="2025-09-12T17:42:47.452720626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:42:47.459543 containerd[1466]: time="2025-09-12T17:42:47.459506131Z" level=info msg="CreateContainer within sandbox \"e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:42:47.485417 
containerd[1466]: time="2025-09-12T17:42:47.483008283Z" level=info msg="CreateContainer within sandbox \"e2ec091153c1928b7e08018c569d3ac298ce1d6759eaa22bb3460c7ccef12249\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4bd4ed1b709a5020bc847d1511b5a64b4228678bb79095d04eef7bff6c03baba\"" Sep 12 17:42:47.485417 containerd[1466]: time="2025-09-12T17:42:47.484151777Z" level=info msg="StartContainer for \"4bd4ed1b709a5020bc847d1511b5a64b4228678bb79095d04eef7bff6c03baba\"" Sep 12 17:42:47.533714 systemd[1]: Started cri-containerd-4bd4ed1b709a5020bc847d1511b5a64b4228678bb79095d04eef7bff6c03baba.scope - libcontainer container 4bd4ed1b709a5020bc847d1511b5a64b4228678bb79095d04eef7bff6c03baba. Sep 12 17:42:47.597169 containerd[1466]: time="2025-09-12T17:42:47.597081161Z" level=info msg="StartContainer for \"4bd4ed1b709a5020bc847d1511b5a64b4228678bb79095d04eef7bff6c03baba\" returns successfully" Sep 12 17:42:47.613371 containerd[1466]: time="2025-09-12T17:42:47.612881612Z" level=info msg="StopPodSandbox for \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\"" Sep 12 17:42:47.613740 containerd[1466]: time="2025-09-12T17:42:47.612881592Z" level=info msg="StopPodSandbox for \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\"" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.722 [INFO][4282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.722 [INFO][4282] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" iface="eth0" netns="/var/run/netns/cni-4cbd6cb0-38c3-ee1e-bb6c-22df92932a9f" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.723 [INFO][4282] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" iface="eth0" netns="/var/run/netns/cni-4cbd6cb0-38c3-ee1e-bb6c-22df92932a9f" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.724 [INFO][4282] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" iface="eth0" netns="/var/run/netns/cni-4cbd6cb0-38c3-ee1e-bb6c-22df92932a9f" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.724 [INFO][4282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.724 [INFO][4282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.765 [INFO][4303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" HandleID="k8s-pod-network.a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.766 [INFO][4303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.766 [INFO][4303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.777 [WARNING][4303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" HandleID="k8s-pod-network.a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.778 [INFO][4303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" HandleID="k8s-pod-network.a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.780 [INFO][4303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:47.786674 containerd[1466]: 2025-09-12 17:42:47.784 [INFO][4282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:42:47.790612 containerd[1466]: time="2025-09-12T17:42:47.789424395Z" level=info msg="TearDown network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\" successfully" Sep 12 17:42:47.790612 containerd[1466]: time="2025-09-12T17:42:47.790411440Z" level=info msg="StopPodSandbox for \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\" returns successfully" Sep 12 17:42:47.794009 containerd[1466]: time="2025-09-12T17:42:47.793233139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jcz47,Uid:6f422465-d46f-4508-ba40-fe8c850f3aa6,Namespace:calico-system,Attempt:1,}" Sep 12 17:42:47.796692 systemd[1]: run-netns-cni\x2d4cbd6cb0\x2d38c3\x2dee1e\x2dbb6c\x2d22df92932a9f.mount: Deactivated successfully. 
Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.721 [INFO][4286] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.722 [INFO][4286] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" iface="eth0" netns="/var/run/netns/cni-ffc974a2-8423-07c8-469f-7429473dac67" Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.724 [INFO][4286] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" iface="eth0" netns="/var/run/netns/cni-ffc974a2-8423-07c8-469f-7429473dac67" Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.724 [INFO][4286] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" iface="eth0" netns="/var/run/netns/cni-ffc974a2-8423-07c8-469f-7429473dac67" Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.724 [INFO][4286] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.724 [INFO][4286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.765 [INFO][4301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" HandleID="k8s-pod-network.f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:42:47.807492 
containerd[1466]: 2025-09-12 17:42:47.766 [INFO][4301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.780 [INFO][4301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.799 [WARNING][4301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" HandleID="k8s-pod-network.f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.800 [INFO][4301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" HandleID="k8s-pod-network.f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.802 [INFO][4301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:47.807492 containerd[1466]: 2025-09-12 17:42:47.805 [INFO][4286] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:42:47.808872 containerd[1466]: time="2025-09-12T17:42:47.807630071Z" level=info msg="TearDown network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\" successfully" Sep 12 17:42:47.808872 containerd[1466]: time="2025-09-12T17:42:47.807663798Z" level=info msg="StopPodSandbox for \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\" returns successfully" Sep 12 17:42:47.808872 containerd[1466]: time="2025-09-12T17:42:47.808751524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd4c4dbff-cp27h,Uid:183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:42:47.927120 kubelet[2632]: I0912 17:42:47.925725 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d8c5bf89b-6wwsz" podStartSLOduration=2.291795512 podStartE2EDuration="5.925699613s" podCreationTimestamp="2025-09-12 17:42:42 +0000 UTC" firstStartedPulling="2025-09-12 17:42:43.820054935 +0000 UTC m=+41.415698029" lastFinishedPulling="2025-09-12 17:42:47.453959034 +0000 UTC m=+45.049602130" observedRunningTime="2025-09-12 17:42:47.918802418 +0000 UTC m=+45.514445540" watchObservedRunningTime="2025-09-12 17:42:47.925699613 +0000 UTC m=+45.521342731" Sep 12 17:42:48.027915 systemd-networkd[1383]: cali347dd06e984: Link UP Sep 12 17:42:48.030071 systemd-networkd[1383]: cali347dd06e984: Gained carrier Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.882 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0 csi-node-driver- calico-system 6f422465-d46f-4508-ba40-fe8c850f3aa6 956 0 2025-09-12 17:42:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal csi-node-driver-jcz47 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali347dd06e984 [] [] }} ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Namespace="calico-system" Pod="csi-node-driver-jcz47" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.883 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Namespace="calico-system" Pod="csi-node-driver-jcz47" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.952 [INFO][4337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" HandleID="k8s-pod-network.6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.953 [INFO][4337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" HandleID="k8s-pod-network.6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", "pod":"csi-node-driver-jcz47", "timestamp":"2025-09-12 17:42:47.9510715 +0000 UTC"}, Hostname:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.953 [INFO][4337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.954 [INFO][4337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.954 [INFO][4337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal' Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.972 [INFO][4337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.978 [INFO][4337] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.984 [INFO][4337] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.989 [INFO][4337] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.993 [INFO][4337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 
host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.993 [INFO][4337] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:47.996 [INFO][4337] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151 Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:48.005 [INFO][4337] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:48.014 [INFO][4337] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.194/26] block=192.168.86.192/26 handle="k8s-pod-network.6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:48.014 [INFO][4337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.194/26] handle="k8s-pod-network.6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:48.014 [INFO][4337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:42:48.055968 containerd[1466]: 2025-09-12 17:42:48.014 [INFO][4337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.194/26] IPv6=[] ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" HandleID="k8s-pod-network.6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:48.057906 containerd[1466]: 2025-09-12 17:42:48.019 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Namespace="calico-system" Pod="csi-node-driver-jcz47" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f422465-d46f-4508-ba40-fe8c850f3aa6", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"", 
Pod:"csi-node-driver-jcz47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali347dd06e984", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:48.057906 containerd[1466]: 2025-09-12 17:42:48.019 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.194/32] ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Namespace="calico-system" Pod="csi-node-driver-jcz47" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:48.057906 containerd[1466]: 2025-09-12 17:42:48.019 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali347dd06e984 ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Namespace="calico-system" Pod="csi-node-driver-jcz47" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:48.057906 containerd[1466]: 2025-09-12 17:42:48.033 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Namespace="calico-system" Pod="csi-node-driver-jcz47" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:48.057906 containerd[1466]: 2025-09-12 17:42:48.034 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Namespace="calico-system" Pod="csi-node-driver-jcz47" 
WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f422465-d46f-4508-ba40-fe8c850f3aa6", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151", Pod:"csi-node-driver-jcz47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali347dd06e984", MAC:"3a:f2:bb:af:c5:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:48.057906 containerd[1466]: 2025-09-12 17:42:48.053 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151" Namespace="calico-system" Pod="csi-node-driver-jcz47" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:42:48.107131 containerd[1466]: time="2025-09-12T17:42:48.106838842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:48.107131 containerd[1466]: time="2025-09-12T17:42:48.106968139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:48.108545 containerd[1466]: time="2025-09-12T17:42:48.107598909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:48.109570 containerd[1466]: time="2025-09-12T17:42:48.109014926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:48.139150 systemd-networkd[1383]: cali95c680a31f0: Link UP Sep 12 17:42:48.141828 systemd-networkd[1383]: cali95c680a31f0: Gained carrier Sep 12 17:42:48.154361 systemd[1]: Started cri-containerd-6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151.scope - libcontainer container 6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151. 
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:47.949 [INFO][4324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0 calico-apiserver-fd4c4dbff- calico-apiserver 183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d 957 0 2025-09-12 17:42:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fd4c4dbff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal calico-apiserver-fd4c4dbff-cp27h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali95c680a31f0 [] [] }} ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-cp27h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-" Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:47.951 [INFO][4324] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-cp27h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.016 [INFO][4346] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" HandleID="k8s-pod-network.928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" 
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.016 [INFO][4346] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" HandleID="k8s-pod-network.928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003206a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", "pod":"calico-apiserver-fd4c4dbff-cp27h", "timestamp":"2025-09-12 17:42:48.016357344 +0000 UTC"}, Hostname:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.016 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.016 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.016 [INFO][4346] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal'
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.072 [INFO][4346] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.088 [INFO][4346] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.097 [INFO][4346] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.100 [INFO][4346] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.103 [INFO][4346] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.104 [INFO][4346] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.105 [INFO][4346] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.112 [INFO][4346] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.123 [INFO][4346] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.195/26] block=192.168.86.192/26 handle="k8s-pod-network.928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.123 [INFO][4346] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.195/26] handle="k8s-pod-network.928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.123 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:42:48.171571 containerd[1466]: 2025-09-12 17:42:48.123 [INFO][4346] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.195/26] IPv6=[] ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" HandleID="k8s-pod-network.928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0"
Sep 12 17:42:48.172839 containerd[1466]: 2025-09-12 17:42:48.129 [INFO][4324] cni-plugin/k8s.go 418: Populated endpoint ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-cp27h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0", GenerateName:"calico-apiserver-fd4c4dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd4c4dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-fd4c4dbff-cp27h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95c680a31f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:42:48.172839 containerd[1466]: 2025-09-12 17:42:48.131 [INFO][4324] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.195/32] ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-cp27h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0"
Sep 12 17:42:48.172839 containerd[1466]: 2025-09-12 17:42:48.131 [INFO][4324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95c680a31f0 ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-cp27h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0"
Sep 12 17:42:48.172839 containerd[1466]: 2025-09-12 17:42:48.143 [INFO][4324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-cp27h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0"
Sep 12 17:42:48.172839 containerd[1466]: 2025-09-12 17:42:48.143 [INFO][4324] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-cp27h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0", GenerateName:"calico-apiserver-fd4c4dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd4c4dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f", Pod:"calico-apiserver-fd4c4dbff-cp27h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95c680a31f0", MAC:"1a:f5:32:3e:11:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:42:48.172839 containerd[1466]: 2025-09-12 17:42:48.166 [INFO][4324] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-cp27h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0"
Sep 12 17:42:48.234487 containerd[1466]: time="2025-09-12T17:42:48.233911432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:42:48.234487 containerd[1466]: time="2025-09-12T17:42:48.234058789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:42:48.234487 containerd[1466]: time="2025-09-12T17:42:48.234116069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:42:48.234487 containerd[1466]: time="2025-09-12T17:42:48.234284701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:42:48.238862 containerd[1466]: time="2025-09-12T17:42:48.238816068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jcz47,Uid:6f422465-d46f-4508-ba40-fe8c850f3aa6,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151\""
Sep 12 17:42:48.243229 containerd[1466]: time="2025-09-12T17:42:48.243190127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 12 17:42:48.272192 systemd[1]: Started cri-containerd-928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f.scope - libcontainer container 928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f.
Sep 12 17:42:48.335882 containerd[1466]: time="2025-09-12T17:42:48.335754478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd4c4dbff-cp27h,Uid:183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f\""
Sep 12 17:42:48.430911 systemd[1]: run-netns-cni\x2dffc974a2\x2d8423\x2d07c8\x2d469f\x2d7429473dac67.mount: Deactivated successfully.
Sep 12 17:42:48.616585 containerd[1466]: time="2025-09-12T17:42:48.614393739Z" level=info msg="StopPodSandbox for \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\""
Sep 12 17:42:48.621999 containerd[1466]: time="2025-09-12T17:42:48.620173335Z" level=info msg="StopPodSandbox for \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\""
Sep 12 17:42:48.624012 containerd[1466]: time="2025-09-12T17:42:48.623416779Z" level=info msg="StopPodSandbox for \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\""
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.794 [INFO][4480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5"
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.794 [INFO][4480] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" iface="eth0" netns="/var/run/netns/cni-c3a6d904-bfb1-9446-bc39-5c5a306fabda"
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.795 [INFO][4480] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" iface="eth0" netns="/var/run/netns/cni-c3a6d904-bfb1-9446-bc39-5c5a306fabda"
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.795 [INFO][4480] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" iface="eth0" netns="/var/run/netns/cni-c3a6d904-bfb1-9446-bc39-5c5a306fabda"
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.795 [INFO][4480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5"
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.795 [INFO][4480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5"
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.890 [INFO][4504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" HandleID="k8s-pod-network.0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.890 [INFO][4504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.890 [INFO][4504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.914 [WARNING][4504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" HandleID="k8s-pod-network.0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.914 [INFO][4504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" HandleID="k8s-pod-network.0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.916 [INFO][4504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:42:48.928242 containerd[1466]: 2025-09-12 17:42:48.918 [INFO][4480] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5"
Sep 12 17:42:48.928242 containerd[1466]: time="2025-09-12T17:42:48.926507331Z" level=info msg="TearDown network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\" successfully"
Sep 12 17:42:48.928242 containerd[1466]: time="2025-09-12T17:42:48.926673413Z" level=info msg="StopPodSandbox for \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\" returns successfully"
Sep 12 17:42:48.928242 containerd[1466]: time="2025-09-12T17:42:48.927907082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd4c4dbff-4fwwl,Uid:1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d,Namespace:calico-apiserver,Attempt:1,}"
Sep 12 17:42:48.940042 systemd[1]: run-netns-cni\x2dc3a6d904\x2dbfb1\x2d9446\x2dbc39\x2d5c5a306fabda.mount: Deactivated successfully.
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.797 [INFO][4484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654"
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.798 [INFO][4484] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" iface="eth0" netns="/var/run/netns/cni-94508539-5f1a-de58-dc3f-ae7914e85cde"
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.798 [INFO][4484] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" iface="eth0" netns="/var/run/netns/cni-94508539-5f1a-de58-dc3f-ae7914e85cde"
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.799 [INFO][4484] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" iface="eth0" netns="/var/run/netns/cni-94508539-5f1a-de58-dc3f-ae7914e85cde"
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.799 [INFO][4484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654"
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.799 [INFO][4484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654"
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.922 [INFO][4506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" HandleID="k8s-pod-network.5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0"
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.922 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.922 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.958 [WARNING][4506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" HandleID="k8s-pod-network.5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0"
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.958 [INFO][4506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" HandleID="k8s-pod-network.5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0"
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.963 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:42:48.984124 containerd[1466]: 2025-09-12 17:42:48.972 [INFO][4484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654"
Sep 12 17:42:48.989202 containerd[1466]: time="2025-09-12T17:42:48.989155199Z" level=info msg="TearDown network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\" successfully"
Sep 12 17:42:48.994107 containerd[1466]: time="2025-09-12T17:42:48.991217566Z" level=info msg="StopPodSandbox for \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\" returns successfully"
Sep 12 17:42:49.001123 containerd[1466]: time="2025-09-12T17:42:48.999328385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ncq8h,Uid:07aac094-4524-4cf9-bf29-6a01c3a02371,Namespace:calico-system,Attempt:1,}"
Sep 12 17:42:49.000220 systemd[1]: run-netns-cni\x2d94508539\x2d5f1a\x2dde58\x2ddc3f\x2dae7914e85cde.mount: Deactivated successfully.
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:48.844 [INFO][4492] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a"
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:48.845 [INFO][4492] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" iface="eth0" netns="/var/run/netns/cni-76f3c195-5a3d-ba91-5300-c57ec525be89"
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:48.847 [INFO][4492] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" iface="eth0" netns="/var/run/netns/cni-76f3c195-5a3d-ba91-5300-c57ec525be89"
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:48.850 [INFO][4492] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" iface="eth0" netns="/var/run/netns/cni-76f3c195-5a3d-ba91-5300-c57ec525be89"
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:48.850 [INFO][4492] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a"
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:48.851 [INFO][4492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a"
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:48.977 [INFO][4516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" HandleID="k8s-pod-network.0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0"
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:48.979 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:48.980 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:49.006 [WARNING][4516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" HandleID="k8s-pod-network.0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0"
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:49.006 [INFO][4516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" HandleID="k8s-pod-network.0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0"
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:49.013 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:42:49.026210 containerd[1466]: 2025-09-12 17:42:49.019 [INFO][4492] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a"
Sep 12 17:42:49.027643 containerd[1466]: time="2025-09-12T17:42:49.026357186Z" level=info msg="TearDown network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\" successfully"
Sep 12 17:42:49.027643 containerd[1466]: time="2025-09-12T17:42:49.026387331Z" level=info msg="StopPodSandbox for \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\" returns successfully"
Sep 12 17:42:49.028535 containerd[1466]: time="2025-09-12T17:42:49.027790082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdfb6d66f-lczz9,Uid:3ca1b676-84c9-4bc8-ae1d-208eeead353b,Namespace:calico-system,Attempt:1,}"
Sep 12 17:42:49.034942 systemd[1]: run-netns-cni\x2d76f3c195\x2d5a3d\x2dba91\x2d5300\x2dc57ec525be89.mount: Deactivated successfully.
Sep 12 17:42:49.225385 systemd-networkd[1383]: cali347dd06e984: Gained IPv6LL
Sep 12 17:42:49.422594 systemd-networkd[1383]: calia5ff398eef7: Link UP
Sep 12 17:42:49.426399 systemd-networkd[1383]: calia5ff398eef7: Gained carrier
Sep 12 17:42:49.480744 systemd-networkd[1383]: cali95c680a31f0: Gained IPv6LL
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.126 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0 calico-apiserver-fd4c4dbff- calico-apiserver 1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d 976 0 2025-09-12 17:42:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fd4c4dbff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal calico-apiserver-fd4c4dbff-4fwwl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia5ff398eef7 [] [] }} ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-4fwwl" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.127 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-4fwwl" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.281 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" HandleID="k8s-pod-network.89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.282 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" HandleID="k8s-pod-network.89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", "pod":"calico-apiserver-fd4c4dbff-4fwwl", "timestamp":"2025-09-12 17:42:49.280037068 +0000 UTC"}, Hostname:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.282 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.283 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.283 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal'
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.311 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.322 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.340 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.347 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.354 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.354 [INFO][4561] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.359 [INFO][4561] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.377 [INFO][4561] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.405 [INFO][4561] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.196/26] block=192.168.86.192/26 handle="k8s-pod-network.89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.405 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.196/26] handle="k8s-pod-network.89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal"
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.406 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:42:49.495962 containerd[1466]: 2025-09-12 17:42:49.406 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.196/26] IPv6=[] ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" HandleID="k8s-pod-network.89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:49.498858 containerd[1466]: 2025-09-12 17:42:49.410 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-4fwwl" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0", GenerateName:"calico-apiserver-fd4c4dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd4c4dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-fd4c4dbff-4fwwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia5ff398eef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:42:49.498858 containerd[1466]: 2025-09-12 17:42:49.411 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.196/32] ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-4fwwl" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:49.498858 containerd[1466]: 2025-09-12 17:42:49.412 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5ff398eef7 ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-4fwwl" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:49.498858 containerd[1466]: 2025-09-12 17:42:49.435 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-4fwwl" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:49.498858 containerd[1466]: 2025-09-12 17:42:49.442 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-4fwwl" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0", GenerateName:"calico-apiserver-fd4c4dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd4c4dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf", Pod:"calico-apiserver-fd4c4dbff-4fwwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia5ff398eef7", MAC:"66:3d:5e:78:ca:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:42:49.498858 containerd[1466]: 2025-09-12 17:42:49.488 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf" Namespace="calico-apiserver" Pod="calico-apiserver-fd4c4dbff-4fwwl" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0"
Sep 12 17:42:49.583139 containerd[1466]: time="2025-09-12T17:42:49.582659094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:42:49.583139 containerd[1466]: time="2025-09-12T17:42:49.582814709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:42:49.583139 containerd[1466]: time="2025-09-12T17:42:49.582865135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:42:49.584026 containerd[1466]: time="2025-09-12T17:42:49.583673242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:42:49.613416 containerd[1466]: time="2025-09-12T17:42:49.612198985Z" level=info msg="StopPodSandbox for \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\""
Sep 12 17:42:49.614756 containerd[1466]: time="2025-09-12T17:42:49.614716125Z" level=info msg="StopPodSandbox for \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\""
Sep 12 17:42:49.645466 systemd[1]: Started cri-containerd-89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf.scope - libcontainer container 89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf.
Sep 12 17:42:49.747631 systemd-networkd[1383]: calif0475a66a71: Link UP
Sep 12 17:42:49.760934 systemd-networkd[1383]: calif0475a66a71: Gained carrier
Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.153 [INFO][4536] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0 goldmane-54d579b49d- calico-system 07aac094-4524-4cf9-bf29-6a01c3a02371 977 0 2025-09-12 17:42:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal goldmane-54d579b49d-ncq8h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif0475a66a71 [] [] }} ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Namespace="calico-system" Pod="goldmane-54d579b49d-ncq8h"
WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.153 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Namespace="calico-system" Pod="goldmane-54d579b49d-ncq8h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.317 [INFO][4567] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" HandleID="k8s-pod-network.3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.317 [INFO][4567] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" HandleID="k8s-pod-network.3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d8960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", "pod":"goldmane-54d579b49d-ncq8h", "timestamp":"2025-09-12 17:42:49.317177664 +0000 UTC"}, Hostname:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.317 [INFO][4567] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.409 [INFO][4567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.409 [INFO][4567] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal' Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.440 [INFO][4567] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.489 [INFO][4567] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.508 [INFO][4567] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.514 [INFO][4567] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.528 [INFO][4567] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.530 [INFO][4567] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.549 [INFO][4567] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.577 [INFO][4567] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.687 [INFO][4567] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.197/26] block=192.168.86.192/26 handle="k8s-pod-network.3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.687 [INFO][4567] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.197/26] handle="k8s-pod-network.3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.687 [INFO][4567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:42:49.835697 containerd[1466]: 2025-09-12 17:42:49.687 [INFO][4567] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.197/26] IPv6=[] ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" HandleID="k8s-pod-network.3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:42:49.837660 containerd[1466]: 2025-09-12 17:42:49.717 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Namespace="calico-system" Pod="goldmane-54d579b49d-ncq8h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"07aac094-4524-4cf9-bf29-6a01c3a02371", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-54d579b49d-ncq8h", Endpoint:"eth0", 
ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0475a66a71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:49.837660 containerd[1466]: 2025-09-12 17:42:49.718 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.197/32] ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Namespace="calico-system" Pod="goldmane-54d579b49d-ncq8h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:42:49.837660 containerd[1466]: 2025-09-12 17:42:49.718 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0475a66a71 ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Namespace="calico-system" Pod="goldmane-54d579b49d-ncq8h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:42:49.837660 containerd[1466]: 2025-09-12 17:42:49.771 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Namespace="calico-system" Pod="goldmane-54d579b49d-ncq8h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:42:49.837660 containerd[1466]: 2025-09-12 17:42:49.773 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Namespace="calico-system" Pod="goldmane-54d579b49d-ncq8h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"07aac094-4524-4cf9-bf29-6a01c3a02371", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc", Pod:"goldmane-54d579b49d-ncq8h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0475a66a71", MAC:"d2:c3:65:91:a3:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:49.837660 containerd[1466]: 2025-09-12 17:42:49.821 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc" Namespace="calico-system" Pod="goldmane-54d579b49d-ncq8h" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:42:49.932249 
systemd-networkd[1383]: cali9836997e06b: Link UP Sep 12 17:42:49.943187 systemd-networkd[1383]: cali9836997e06b: Gained carrier Sep 12 17:42:49.995398 containerd[1466]: time="2025-09-12T17:42:49.995289388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:49.996162 containerd[1466]: time="2025-09-12T17:42:49.995588099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:49.996162 containerd[1466]: time="2025-09-12T17:42:49.995623643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:49.996162 containerd[1466]: time="2025-09-12T17:42:49.995757837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.217 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0 calico-kube-controllers-6fdfb6d66f- calico-system 3ca1b676-84c9-4bc8-ae1d-208eeead353b 978 0 2025-09-12 17:42:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fdfb6d66f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal calico-kube-controllers-6fdfb6d66f-lczz9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9836997e06b [] [] }} ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Namespace="calico-system" 
Pod="calico-kube-controllers-6fdfb6d66f-lczz9" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.220 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Namespace="calico-system" Pod="calico-kube-controllers-6fdfb6d66f-lczz9" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.326 [INFO][4572] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" HandleID="k8s-pod-network.bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.326 [INFO][4572] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" HandleID="k8s-pod-network.bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028d8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", "pod":"calico-kube-controllers-6fdfb6d66f-lczz9", "timestamp":"2025-09-12 17:42:49.326768172 +0000 UTC"}, Hostname:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.327 [INFO][4572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.689 [INFO][4572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.689 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal' Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.731 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.770 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.814 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.826 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.834 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.834 [INFO][4572] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" 
host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.841 [INFO][4572] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129 Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.860 [INFO][4572] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.908 [INFO][4572] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.198/26] block=192.168.86.192/26 handle="k8s-pod-network.bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.908 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.198/26] handle="k8s-pod-network.bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.908 [INFO][4572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:42:50.022428 containerd[1466]: 2025-09-12 17:42:49.908 [INFO][4572] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.198/26] IPv6=[] ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" HandleID="k8s-pod-network.bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:42:50.026917 containerd[1466]: 2025-09-12 17:42:49.922 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Namespace="calico-system" Pod="calico-kube-controllers-6fdfb6d66f-lczz9" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0", GenerateName:"calico-kube-controllers-6fdfb6d66f-", Namespace:"calico-system", SelfLink:"", UID:"3ca1b676-84c9-4bc8-ae1d-208eeead353b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdfb6d66f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-6fdfb6d66f-lczz9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9836997e06b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:50.026917 containerd[1466]: 2025-09-12 17:42:49.922 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.198/32] ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Namespace="calico-system" Pod="calico-kube-controllers-6fdfb6d66f-lczz9" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:42:50.026917 containerd[1466]: 2025-09-12 17:42:49.922 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9836997e06b ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Namespace="calico-system" Pod="calico-kube-controllers-6fdfb6d66f-lczz9" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:42:50.026917 containerd[1466]: 2025-09-12 17:42:49.951 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Namespace="calico-system" Pod="calico-kube-controllers-6fdfb6d66f-lczz9" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:42:50.026917 containerd[1466]: 2025-09-12 17:42:49.952 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Namespace="calico-system" Pod="calico-kube-controllers-6fdfb6d66f-lczz9" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0", GenerateName:"calico-kube-controllers-6fdfb6d66f-", Namespace:"calico-system", SelfLink:"", UID:"3ca1b676-84c9-4bc8-ae1d-208eeead353b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdfb6d66f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129", Pod:"calico-kube-controllers-6fdfb6d66f-lczz9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9836997e06b", MAC:"52:1c:e8:6b:ce:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:50.026917 containerd[1466]: 2025-09-12 17:42:50.013 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129" Namespace="calico-system" Pod="calico-kube-controllers-6fdfb6d66f-lczz9" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:42:50.096346 systemd[1]: Started cri-containerd-3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc.scope - libcontainer container 3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc. Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.100 [INFO][4641] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.100 [INFO][4641] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" iface="eth0" netns="/var/run/netns/cni-1da3194b-f306-46fe-2d63-5b061eadc666" Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.102 [INFO][4641] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" iface="eth0" netns="/var/run/netns/cni-1da3194b-f306-46fe-2d63-5b061eadc666" Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.103 [INFO][4641] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" iface="eth0" netns="/var/run/netns/cni-1da3194b-f306-46fe-2d63-5b061eadc666" Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.103 [INFO][4641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.103 [INFO][4641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.156 [INFO][4709] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" HandleID="k8s-pod-network.2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.157 [INFO][4709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.157 [INFO][4709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.168 [WARNING][4709] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" HandleID="k8s-pod-network.2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.168 [INFO][4709] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" HandleID="k8s-pod-network.2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.171 [INFO][4709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:50.176169 containerd[1466]: 2025-09-12 17:42:50.173 [INFO][4641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:42:50.176934 containerd[1466]: time="2025-09-12T17:42:50.176267890Z" level=info msg="TearDown network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\" successfully" Sep 12 17:42:50.176934 containerd[1466]: time="2025-09-12T17:42:50.176302829Z" level=info msg="StopPodSandbox for \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\" returns successfully" Sep 12 17:42:50.179125 containerd[1466]: time="2025-09-12T17:42:50.177192056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8l2m5,Uid:8cf3bd71-c352-4c33-a791-2ad40156deb1,Namespace:kube-system,Attempt:1,}" Sep 12 17:42:50.211149 containerd[1466]: time="2025-09-12T17:42:50.209868604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:50.211149 containerd[1466]: time="2025-09-12T17:42:50.209950071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:50.216244 containerd[1466]: time="2025-09-12T17:42:50.215801266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:50.216244 containerd[1466]: time="2025-09-12T17:42:50.215974037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:50.305326 systemd[1]: Started cri-containerd-bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129.scope - libcontainer container bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129. Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.025 [INFO][4640] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.028 [INFO][4640] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" iface="eth0" netns="/var/run/netns/cni-082b15fd-67c5-fd3b-e827-523561312aae" Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.031 [INFO][4640] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" iface="eth0" netns="/var/run/netns/cni-082b15fd-67c5-fd3b-e827-523561312aae" Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.034 [INFO][4640] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" iface="eth0" netns="/var/run/netns/cni-082b15fd-67c5-fd3b-e827-523561312aae" Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.034 [INFO][4640] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.034 [INFO][4640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.279 [INFO][4686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" HandleID="k8s-pod-network.6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.289 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.289 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.328 [WARNING][4686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" HandleID="k8s-pod-network.6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.330 [INFO][4686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" HandleID="k8s-pod-network.6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.337 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:50.354978 containerd[1466]: 2025-09-12 17:42:50.348 [INFO][4640] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:42:50.354978 containerd[1466]: time="2025-09-12T17:42:50.353882155Z" level=info msg="TearDown network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\" successfully" Sep 12 17:42:50.356158 containerd[1466]: time="2025-09-12T17:42:50.353919120Z" level=info msg="StopPodSandbox for \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\" returns successfully" Sep 12 17:42:50.358110 containerd[1466]: time="2025-09-12T17:42:50.356553520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nf9rd,Uid:1d23bfb6-ec70-4170-8790-d653e47690fd,Namespace:kube-system,Attempt:1,}" Sep 12 17:42:50.406038 containerd[1466]: time="2025-09-12T17:42:50.405979168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:50.414634 containerd[1466]: 
time="2025-09-12T17:42:50.414240265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:42:50.416418 containerd[1466]: time="2025-09-12T17:42:50.416371015Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:50.450527 systemd[1]: run-containerd-runc-k8s.io-3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc-runc.IiClJ2.mount: Deactivated successfully. Sep 12 17:42:50.450941 systemd[1]: run-netns-cni\x2d1da3194b\x2df306\x2d46fe\x2d2d63\x2d5b061eadc666.mount: Deactivated successfully. Sep 12 17:42:50.451077 systemd[1]: run-netns-cni\x2d082b15fd\x2d67c5\x2dfd3b\x2de827\x2d523561312aae.mount: Deactivated successfully. Sep 12 17:42:50.461572 containerd[1466]: time="2025-09-12T17:42:50.461519487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:50.468084 containerd[1466]: time="2025-09-12T17:42:50.467078223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.223837521s" Sep 12 17:42:50.468084 containerd[1466]: time="2025-09-12T17:42:50.467900898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:42:50.478465 containerd[1466]: time="2025-09-12T17:42:50.477768345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:42:50.481497 containerd[1466]: 
time="2025-09-12T17:42:50.481227682Z" level=info msg="CreateContainer within sandbox \"6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:42:50.558813 containerd[1466]: time="2025-09-12T17:42:50.558669234Z" level=info msg="CreateContainer within sandbox \"6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e15216b8747746dc17b0863aa0f7c4b61b567dccfe37d225a86ade39018339e9\"" Sep 12 17:42:50.561837 containerd[1466]: time="2025-09-12T17:42:50.561676467Z" level=info msg="StartContainer for \"e15216b8747746dc17b0863aa0f7c4b61b567dccfe37d225a86ade39018339e9\"" Sep 12 17:42:50.714593 systemd[1]: run-containerd-runc-k8s.io-e15216b8747746dc17b0863aa0f7c4b61b567dccfe37d225a86ade39018339e9-runc.hrCFqa.mount: Deactivated successfully. Sep 12 17:42:50.729868 systemd[1]: Started cri-containerd-e15216b8747746dc17b0863aa0f7c4b61b567dccfe37d225a86ade39018339e9.scope - libcontainer container e15216b8747746dc17b0863aa0f7c4b61b567dccfe37d225a86ade39018339e9. 
Sep 12 17:42:50.764495 systemd-networkd[1383]: cali6261262fd0c: Link UP Sep 12 17:42:50.770136 systemd-networkd[1383]: cali6261262fd0c: Gained carrier Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.404 [INFO][4738] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0 coredns-674b8bbfcf- kube-system 8cf3bd71-c352-4c33-a791-2ad40156deb1 996 0 2025-09-12 17:42:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal coredns-674b8bbfcf-8l2m5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6261262fd0c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8l2m5" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.404 [INFO][4738] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8l2m5" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.605 [INFO][4777] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" HandleID="k8s-pod-network.965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" 
Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.608 [INFO][4777] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" HandleID="k8s-pod-network.965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-8l2m5", "timestamp":"2025-09-12 17:42:50.592622931 +0000 UTC"}, Hostname:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.610 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.610 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.610 [INFO][4777] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal' Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.655 [INFO][4777] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.678 [INFO][4777] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.685 [INFO][4777] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.688 [INFO][4777] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.691 [INFO][4777] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.691 [INFO][4777] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.693 [INFO][4777] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4 Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.710 [INFO][4777] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.86.192/26 handle="k8s-pod-network.965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.722 [INFO][4777] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.199/26] block=192.168.86.192/26 handle="k8s-pod-network.965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.722 [INFO][4777] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.199/26] handle="k8s-pod-network.965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.722 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:50.823890 containerd[1466]: 2025-09-12 17:42:50.722 [INFO][4777] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.199/26] IPv6=[] ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" HandleID="k8s-pod-network.965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.826192 containerd[1466]: 2025-09-12 17:42:50.739 [INFO][4738] cni-plugin/k8s.go 418: Populated endpoint ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8l2m5" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cf3bd71-c352-4c33-a791-2ad40156deb1", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-8l2m5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6261262fd0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:50.826192 containerd[1466]: 2025-09-12 17:42:50.740 [INFO][4738] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.199/32] ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-8l2m5" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.826192 containerd[1466]: 2025-09-12 17:42:50.741 [INFO][4738] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6261262fd0c ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8l2m5" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.826192 containerd[1466]: 2025-09-12 17:42:50.774 [INFO][4738] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8l2m5" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.826192 containerd[1466]: 2025-09-12 17:42:50.776 [INFO][4738] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8l2m5" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cf3bd71-c352-4c33-a791-2ad40156deb1", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", 
"pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4", Pod:"coredns-674b8bbfcf-8l2m5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6261262fd0c", MAC:"da:94:97:0e:a9:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:50.826192 containerd[1466]: 2025-09-12 17:42:50.816 [INFO][4738] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8l2m5" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:42:50.869230 systemd-networkd[1383]: cali43b8d58c086: Link UP Sep 12 17:42:50.872304 systemd-networkd[1383]: cali43b8d58c086: Gained carrier Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.554 [INFO][4764] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0 coredns-674b8bbfcf- kube-system 1d23bfb6-ec70-4170-8790-d653e47690fd 994 0 2025-09-12 17:42:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal coredns-674b8bbfcf-nf9rd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43b8d58c086 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Namespace="kube-system" Pod="coredns-674b8bbfcf-nf9rd" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.554 [INFO][4764] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Namespace="kube-system" Pod="coredns-674b8bbfcf-nf9rd" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.655 [INFO][4787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" HandleID="k8s-pod-network.20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.656 [INFO][4787] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" 
HandleID="k8s-pod-network.20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036f270), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-nf9rd", "timestamp":"2025-09-12 17:42:50.65538643 +0000 UTC"}, Hostname:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.656 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.725 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.725 [INFO][4787] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal' Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.759 [INFO][4787] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.778 [INFO][4787] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.796 [INFO][4787] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.801 [INFO][4787] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.804 [INFO][4787] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.804 [INFO][4787] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.807 [INFO][4787] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.816 [INFO][4787] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.86.192/26 handle="k8s-pod-network.20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.831 [INFO][4787] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.200/26] block=192.168.86.192/26 handle="k8s-pod-network.20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.832 [INFO][4787] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.200/26] handle="k8s-pod-network.20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" host="ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal" Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.832 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:50.924721 containerd[1466]: 2025-09-12 17:42:50.832 [INFO][4787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.200/26] IPv6=[] ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" HandleID="k8s-pod-network.20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.929278 containerd[1466]: 2025-09-12 17:42:50.851 [INFO][4764] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Namespace="kube-system" Pod="coredns-674b8bbfcf-nf9rd" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d23bfb6-ec70-4170-8790-d653e47690fd", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-nf9rd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43b8d58c086", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:50.929278 containerd[1466]: 2025-09-12 17:42:50.852 [INFO][4764] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.200/32] ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-nf9rd" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.929278 containerd[1466]: 2025-09-12 17:42:50.853 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43b8d58c086 ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Namespace="kube-system" Pod="coredns-674b8bbfcf-nf9rd" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.929278 containerd[1466]: 2025-09-12 17:42:50.875 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Namespace="kube-system" Pod="coredns-674b8bbfcf-nf9rd" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.929278 containerd[1466]: 2025-09-12 17:42:50.875 [INFO][4764] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Namespace="kube-system" Pod="coredns-674b8bbfcf-nf9rd" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d23bfb6-ec70-4170-8790-d653e47690fd", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", 
"pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b", Pod:"coredns-674b8bbfcf-nf9rd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43b8d58c086", MAC:"c6:63:aa:6a:5e:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:50.929278 containerd[1466]: 2025-09-12 17:42:50.892 [INFO][4764] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b" Namespace="kube-system" Pod="coredns-674b8bbfcf-nf9rd" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:42:50.946546 containerd[1466]: time="2025-09-12T17:42:50.946502620Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-fd4c4dbff-4fwwl,Uid:1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf\"" Sep 12 17:42:50.952442 systemd-networkd[1383]: calia5ff398eef7: Gained IPv6LL Sep 12 17:42:50.952893 systemd-networkd[1383]: calif0475a66a71: Gained IPv6LL Sep 12 17:42:50.997273 containerd[1466]: time="2025-09-12T17:42:50.992946284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:51.000283 containerd[1466]: time="2025-09-12T17:42:51.000180887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:51.000417 containerd[1466]: time="2025-09-12T17:42:51.000371136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:51.002377 containerd[1466]: time="2025-09-12T17:42:51.002234232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:51.016732 containerd[1466]: time="2025-09-12T17:42:51.016681768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ncq8h,Uid:07aac094-4524-4cf9-bf29-6a01c3a02371,Namespace:calico-system,Attempt:1,} returns sandbox id \"3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc\"" Sep 12 17:42:51.054961 containerd[1466]: time="2025-09-12T17:42:51.053030098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:51.054961 containerd[1466]: time="2025-09-12T17:42:51.053126815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:51.054961 containerd[1466]: time="2025-09-12T17:42:51.053147479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:51.054961 containerd[1466]: time="2025-09-12T17:42:51.053264392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:51.080337 systemd[1]: Started cri-containerd-965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4.scope - libcontainer container 965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4. Sep 12 17:42:51.126340 systemd[1]: Started cri-containerd-20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b.scope - libcontainer container 20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b. Sep 12 17:42:51.135627 containerd[1466]: time="2025-09-12T17:42:51.135083808Z" level=info msg="StartContainer for \"e15216b8747746dc17b0863aa0f7c4b61b567dccfe37d225a86ade39018339e9\" returns successfully" Sep 12 17:42:51.209797 containerd[1466]: time="2025-09-12T17:42:51.209723885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdfb6d66f-lczz9,Uid:3ca1b676-84c9-4bc8-ae1d-208eeead353b,Namespace:calico-system,Attempt:1,} returns sandbox id \"bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129\"" Sep 12 17:42:51.267116 containerd[1466]: time="2025-09-12T17:42:51.266054867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8l2m5,Uid:8cf3bd71-c352-4c33-a791-2ad40156deb1,Namespace:kube-system,Attempt:1,} returns sandbox id \"965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4\"" Sep 12 17:42:51.278119 containerd[1466]: time="2025-09-12T17:42:51.276923967Z" level=info msg="CreateContainer within sandbox \"965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:42:51.293705 containerd[1466]: time="2025-09-12T17:42:51.290387945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nf9rd,Uid:1d23bfb6-ec70-4170-8790-d653e47690fd,Namespace:kube-system,Attempt:1,} returns sandbox id \"20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b\"" Sep 12 17:42:51.307178 containerd[1466]: time="2025-09-12T17:42:51.307102355Z" level=info msg="CreateContainer within sandbox \"20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:42:51.318534 containerd[1466]: time="2025-09-12T17:42:51.318412295Z" level=info msg="CreateContainer within sandbox \"965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4d6e4583d40d57f3f1e4288a726c08fe18e8f250124735497fee6f7b418dd7e\"" Sep 12 17:42:51.320206 containerd[1466]: time="2025-09-12T17:42:51.319937368Z" level=info msg="StartContainer for \"f4d6e4583d40d57f3f1e4288a726c08fe18e8f250124735497fee6f7b418dd7e\"" Sep 12 17:42:51.335849 containerd[1466]: time="2025-09-12T17:42:51.335038276Z" level=info msg="CreateContainer within sandbox \"20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"856fb3e448a7d19d6c9cefc9c329fd4663518006b7f95d7a34eb6de630ab2bdc\"" Sep 12 17:42:51.340118 containerd[1466]: time="2025-09-12T17:42:51.339752926Z" level=info msg="StartContainer for \"856fb3e448a7d19d6c9cefc9c329fd4663518006b7f95d7a34eb6de630ab2bdc\"" Sep 12 17:42:51.396335 systemd[1]: Started cri-containerd-f4d6e4583d40d57f3f1e4288a726c08fe18e8f250124735497fee6f7b418dd7e.scope - libcontainer container f4d6e4583d40d57f3f1e4288a726c08fe18e8f250124735497fee6f7b418dd7e. 
Sep 12 17:42:51.468291 systemd[1]: Started cri-containerd-856fb3e448a7d19d6c9cefc9c329fd4663518006b7f95d7a34eb6de630ab2bdc.scope - libcontainer container 856fb3e448a7d19d6c9cefc9c329fd4663518006b7f95d7a34eb6de630ab2bdc. Sep 12 17:42:51.561879 containerd[1466]: time="2025-09-12T17:42:51.561817831Z" level=info msg="StartContainer for \"f4d6e4583d40d57f3f1e4288a726c08fe18e8f250124735497fee6f7b418dd7e\" returns successfully" Sep 12 17:42:51.601955 containerd[1466]: time="2025-09-12T17:42:51.599191864Z" level=info msg="StartContainer for \"856fb3e448a7d19d6c9cefc9c329fd4663518006b7f95d7a34eb6de630ab2bdc\" returns successfully" Sep 12 17:42:51.659795 systemd-networkd[1383]: cali9836997e06b: Gained IPv6LL Sep 12 17:42:52.041751 kubelet[2632]: I0912 17:42:52.040685 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nf9rd" podStartSLOduration=45.040661735 podStartE2EDuration="45.040661735s" podCreationTimestamp="2025-09-12 17:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:52.015559307 +0000 UTC m=+49.611202427" watchObservedRunningTime="2025-09-12 17:42:52.040661735 +0000 UTC m=+49.636304854" Sep 12 17:42:52.107322 systemd-networkd[1383]: cali43b8d58c086: Gained IPv6LL Sep 12 17:42:52.552885 systemd-networkd[1383]: cali6261262fd0c: Gained IPv6LL Sep 12 17:42:53.036777 kubelet[2632]: I0912 17:42:53.036705 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8l2m5" podStartSLOduration=46.036682241 podStartE2EDuration="46.036682241s" podCreationTimestamp="2025-09-12 17:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:52.045233963 +0000 UTC m=+49.640877070" watchObservedRunningTime="2025-09-12 17:42:53.036682241 +0000 UTC m=+50.632325336" Sep 12 
17:42:54.180640 containerd[1466]: time="2025-09-12T17:42:54.180580621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:54.182077 containerd[1466]: time="2025-09-12T17:42:54.181817952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:42:54.183342 containerd[1466]: time="2025-09-12T17:42:54.183284273Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:54.188289 containerd[1466]: time="2025-09-12T17:42:54.188247894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:54.190435 containerd[1466]: time="2025-09-12T17:42:54.189402410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.711588079s" Sep 12 17:42:54.190435 containerd[1466]: time="2025-09-12T17:42:54.189454071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:42:54.195457 containerd[1466]: time="2025-09-12T17:42:54.193974374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:42:54.201782 containerd[1466]: time="2025-09-12T17:42:54.201737910Z" level=info msg="CreateContainer within sandbox 
\"928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:42:54.223561 containerd[1466]: time="2025-09-12T17:42:54.220456796Z" level=info msg="CreateContainer within sandbox \"928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3195d92c1156f6d38d8f2d703dc5c378cec488080f431c317f4a2bfcc5045e40\"" Sep 12 17:42:54.232133 containerd[1466]: time="2025-09-12T17:42:54.224141169Z" level=info msg="StartContainer for \"3195d92c1156f6d38d8f2d703dc5c378cec488080f431c317f4a2bfcc5045e40\"" Sep 12 17:42:54.306679 systemd[1]: Started cri-containerd-3195d92c1156f6d38d8f2d703dc5c378cec488080f431c317f4a2bfcc5045e40.scope - libcontainer container 3195d92c1156f6d38d8f2d703dc5c378cec488080f431c317f4a2bfcc5045e40. Sep 12 17:42:54.401504 containerd[1466]: time="2025-09-12T17:42:54.401451063Z" level=info msg="StartContainer for \"3195d92c1156f6d38d8f2d703dc5c378cec488080f431c317f4a2bfcc5045e40\" returns successfully" Sep 12 17:42:54.407925 containerd[1466]: time="2025-09-12T17:42:54.406711393Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:54.409640 containerd[1466]: time="2025-09-12T17:42:54.409588677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:42:54.423835 containerd[1466]: time="2025-09-12T17:42:54.423774639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 229.75753ms" Sep 12 17:42:54.424060 containerd[1466]: 
time="2025-09-12T17:42:54.424033772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:42:54.427779 containerd[1466]: time="2025-09-12T17:42:54.427710296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:42:54.432213 containerd[1466]: time="2025-09-12T17:42:54.432109390Z" level=info msg="CreateContainer within sandbox \"89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:42:54.458633 containerd[1466]: time="2025-09-12T17:42:54.458582833Z" level=info msg="CreateContainer within sandbox \"89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1892754598786dd58ab8aa9004322d093063e56d471b6074754923cbc2d1b2dd\"" Sep 12 17:42:54.459554 containerd[1466]: time="2025-09-12T17:42:54.459519322Z" level=info msg="StartContainer for \"1892754598786dd58ab8aa9004322d093063e56d471b6074754923cbc2d1b2dd\"" Sep 12 17:42:54.523102 systemd[1]: Started cri-containerd-1892754598786dd58ab8aa9004322d093063e56d471b6074754923cbc2d1b2dd.scope - libcontainer container 1892754598786dd58ab8aa9004322d093063e56d471b6074754923cbc2d1b2dd. 
Sep 12 17:42:54.709911 containerd[1466]: time="2025-09-12T17:42:54.709518239Z" level=info msg="StartContainer for \"1892754598786dd58ab8aa9004322d093063e56d471b6074754923cbc2d1b2dd\" returns successfully" Sep 12 17:42:54.863467 ntpd[1434]: Listen normally on 8 vxlan.calico 192.168.86.192:123 Sep 12 17:42:54.863602 ntpd[1434]: Listen normally on 9 calic2abce5daa0 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 8 vxlan.calico 192.168.86.192:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 9 calic2abce5daa0 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 10 vxlan.calico [fe80::641d:58ff:fe2c:714a%5]:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 11 cali347dd06e984 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 12 cali95c680a31f0 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 13 calia5ff398eef7 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 14 calif0475a66a71 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 15 cali9836997e06b [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 16 cali6261262fd0c [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:42:54.864141 ntpd[1434]: 12 Sep 17:42:54 ntpd[1434]: Listen normally on 17 cali43b8d58c086 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:42:54.863687 ntpd[1434]: Listen normally on 10 vxlan.calico [fe80::641d:58ff:fe2c:714a%5]:123 Sep 12 17:42:54.863751 ntpd[1434]: Listen normally on 11 cali347dd06e984 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 17:42:54.863810 ntpd[1434]: Listen normally on 12 
cali95c680a31f0 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:42:54.863869 ntpd[1434]: Listen normally on 13 calia5ff398eef7 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:42:54.863927 ntpd[1434]: Listen normally on 14 calif0475a66a71 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:42:54.863982 ntpd[1434]: Listen normally on 15 cali9836997e06b [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:42:54.864039 ntpd[1434]: Listen normally on 16 cali6261262fd0c [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:42:54.864116 ntpd[1434]: Listen normally on 17 cali43b8d58c086 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:42:55.072727 kubelet[2632]: I0912 17:42:55.072537 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fd4c4dbff-4fwwl" podStartSLOduration=33.604667773 podStartE2EDuration="37.072513703s" podCreationTimestamp="2025-09-12 17:42:18 +0000 UTC" firstStartedPulling="2025-09-12 17:42:50.958524762 +0000 UTC m=+48.554167868" lastFinishedPulling="2025-09-12 17:42:54.426370704 +0000 UTC m=+52.022013798" observedRunningTime="2025-09-12 17:42:55.071571377 +0000 UTC m=+52.667214495" watchObservedRunningTime="2025-09-12 17:42:55.072513703 +0000 UTC m=+52.668156821" Sep 12 17:42:55.072727 kubelet[2632]: I0912 17:42:55.072672 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fd4c4dbff-cp27h" podStartSLOduration=31.220787166 podStartE2EDuration="37.072661979s" podCreationTimestamp="2025-09-12 17:42:18 +0000 UTC" firstStartedPulling="2025-09-12 17:42:48.340939169 +0000 UTC m=+45.936582280" lastFinishedPulling="2025-09-12 17:42:54.192814002 +0000 UTC m=+51.788457093" observedRunningTime="2025-09-12 17:42:55.046853384 +0000 UTC m=+52.642496501" watchObservedRunningTime="2025-09-12 17:42:55.072661979 +0000 UTC m=+52.668305097" Sep 12 17:42:56.024743 kubelet[2632]: I0912 17:42:56.024216 2632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 
17:42:57.028200 kubelet[2632]: I0912 17:42:57.028155 2632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:42:57.804360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800462125.mount: Deactivated successfully. Sep 12 17:42:59.452936 containerd[1466]: time="2025-09-12T17:42:59.452877570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:59.455991 containerd[1466]: time="2025-09-12T17:42:59.455914490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:42:59.460136 containerd[1466]: time="2025-09-12T17:42:59.457871926Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:59.465343 containerd[1466]: time="2025-09-12T17:42:59.465295798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:59.467344 containerd[1466]: time="2025-09-12T17:42:59.467145668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.039199415s" Sep 12 17:42:59.467344 containerd[1466]: time="2025-09-12T17:42:59.467212720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:42:59.471540 containerd[1466]: time="2025-09-12T17:42:59.471506423Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:42:59.476200 containerd[1466]: time="2025-09-12T17:42:59.475978460Z" level=info msg="CreateContainer within sandbox \"3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:42:59.499724 containerd[1466]: time="2025-09-12T17:42:59.499477715Z" level=info msg="CreateContainer within sandbox \"3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2950043688be3b259b598243141caa2588d99c2ec7654396d22599dd1cc52c0e\"" Sep 12 17:42:59.501275 containerd[1466]: time="2025-09-12T17:42:59.501224341Z" level=info msg="StartContainer for \"2950043688be3b259b598243141caa2588d99c2ec7654396d22599dd1cc52c0e\"" Sep 12 17:42:59.572287 systemd[1]: Started cri-containerd-2950043688be3b259b598243141caa2588d99c2ec7654396d22599dd1cc52c0e.scope - libcontainer container 2950043688be3b259b598243141caa2588d99c2ec7654396d22599dd1cc52c0e. 
Sep 12 17:42:59.671461 containerd[1466]: time="2025-09-12T17:42:59.671405640Z" level=info msg="StartContainer for \"2950043688be3b259b598243141caa2588d99c2ec7654396d22599dd1cc52c0e\" returns successfully" Sep 12 17:43:00.099870 kubelet[2632]: I0912 17:43:00.098799 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-ncq8h" podStartSLOduration=28.651094335 podStartE2EDuration="37.098773016s" podCreationTimestamp="2025-09-12 17:42:23 +0000 UTC" firstStartedPulling="2025-09-12 17:42:51.021693637 +0000 UTC m=+48.617336744" lastFinishedPulling="2025-09-12 17:42:59.469372329 +0000 UTC m=+57.065015425" observedRunningTime="2025-09-12 17:43:00.097931447 +0000 UTC m=+57.693574565" watchObservedRunningTime="2025-09-12 17:43:00.098773016 +0000 UTC m=+57.694416137" Sep 12 17:43:01.216340 containerd[1466]: time="2025-09-12T17:43:01.216288119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:01.220601 containerd[1466]: time="2025-09-12T17:43:01.220519717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:43:01.222134 containerd[1466]: time="2025-09-12T17:43:01.221484037Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:01.228939 containerd[1466]: time="2025-09-12T17:43:01.228636307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:01.231736 containerd[1466]: time="2025-09-12T17:43:01.229840715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id 
\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.758286956s" Sep 12 17:43:01.232010 containerd[1466]: time="2025-09-12T17:43:01.231982541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:43:01.239603 containerd[1466]: time="2025-09-12T17:43:01.239568407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:43:01.243451 containerd[1466]: time="2025-09-12T17:43:01.243413189Z" level=info msg="CreateContainer within sandbox \"6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:43:01.274636 containerd[1466]: time="2025-09-12T17:43:01.274497139Z" level=info msg="CreateContainer within sandbox \"6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7c9900a60f5665e2b485f09c834db7179f11282b45ba3ead0e5d9bfd7c774348\"" Sep 12 17:43:01.277576 containerd[1466]: time="2025-09-12T17:43:01.277528968Z" level=info msg="StartContainer for \"7c9900a60f5665e2b485f09c834db7179f11282b45ba3ead0e5d9bfd7c774348\"" Sep 12 17:43:01.384321 systemd[1]: Started cri-containerd-7c9900a60f5665e2b485f09c834db7179f11282b45ba3ead0e5d9bfd7c774348.scope - libcontainer container 7c9900a60f5665e2b485f09c834db7179f11282b45ba3ead0e5d9bfd7c774348. 
Sep 12 17:43:01.725518 containerd[1466]: time="2025-09-12T17:43:01.725276954Z" level=info msg="StartContainer for \"7c9900a60f5665e2b485f09c834db7179f11282b45ba3ead0e5d9bfd7c774348\" returns successfully" Sep 12 17:43:02.078474 kubelet[2632]: I0912 17:43:02.077185 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jcz47" podStartSLOduration=26.08201961 podStartE2EDuration="39.077165288s" podCreationTimestamp="2025-09-12 17:42:23 +0000 UTC" firstStartedPulling="2025-09-12 17:42:48.240803376 +0000 UTC m=+45.836446488" lastFinishedPulling="2025-09-12 17:43:01.235949069 +0000 UTC m=+58.831592166" observedRunningTime="2025-09-12 17:43:02.075034008 +0000 UTC m=+59.670677127" watchObservedRunningTime="2025-09-12 17:43:02.077165288 +0000 UTC m=+59.672808404" Sep 12 17:43:02.576075 containerd[1466]: time="2025-09-12T17:43:02.576009906Z" level=info msg="StopPodSandbox for \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\"" Sep 12 17:43:02.785389 kubelet[2632]: I0912 17:43:02.785351 2632 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:43:02.787756 kubelet[2632]: I0912 17:43:02.787191 2632 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.673 [WARNING][5263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f422465-d46f-4508-ba40-fe8c850f3aa6", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151", Pod:"csi-node-driver-jcz47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali347dd06e984", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.674 [INFO][5263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 
17:43:02.674 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" iface="eth0" netns="" Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.674 [INFO][5263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.674 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.765 [INFO][5272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" HandleID="k8s-pod-network.a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.768 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.768 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.803 [WARNING][5272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" HandleID="k8s-pod-network.a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.804 [INFO][5272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" HandleID="k8s-pod-network.a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.809 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:02.816055 containerd[1466]: 2025-09-12 17:43:02.812 [INFO][5263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:43:02.816860 containerd[1466]: time="2025-09-12T17:43:02.816184179Z" level=info msg="TearDown network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\" successfully" Sep 12 17:43:02.816860 containerd[1466]: time="2025-09-12T17:43:02.816220859Z" level=info msg="StopPodSandbox for \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\" returns successfully" Sep 12 17:43:02.820117 containerd[1466]: time="2025-09-12T17:43:02.817338832Z" level=info msg="RemovePodSandbox for \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\"" Sep 12 17:43:02.820117 containerd[1466]: time="2025-09-12T17:43:02.817380956Z" level=info msg="Forcibly stopping sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\"" Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:02.965 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f422465-d46f-4508-ba40-fe8c850f3aa6", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"6ca1adc6bac1b7e1a474dd49bb844e40f1aaa00be810a5ecea59ef13efe5b151", Pod:"csi-node-driver-jcz47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali347dd06e984", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:02.967 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:43:03.142749 
containerd[1466]: 2025-09-12 17:43:02.967 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" iface="eth0" netns="" Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:02.967 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:02.967 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:03.057 [INFO][5294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" HandleID="k8s-pod-network.a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:03.060 [INFO][5294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:03.060 [INFO][5294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:03.116 [WARNING][5294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" HandleID="k8s-pod-network.a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:03.116 [INFO][5294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" HandleID="k8s-pod-network.a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-csi--node--driver--jcz47-eth0" Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:03.127 [INFO][5294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:03.142749 containerd[1466]: 2025-09-12 17:43:03.135 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a" Sep 12 17:43:03.143628 containerd[1466]: time="2025-09-12T17:43:03.142793253Z" level=info msg="TearDown network for sandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\" successfully" Sep 12 17:43:03.158098 containerd[1466]: time="2025-09-12T17:43:03.157223317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:03.158098 containerd[1466]: time="2025-09-12T17:43:03.157326492Z" level=info msg="RemovePodSandbox \"a14899a6b3ba6d41e6397978d628b0ee66151fd4a189515d973cfacdb4e7f61a\" returns successfully" Sep 12 17:43:03.158789 containerd[1466]: time="2025-09-12T17:43:03.158337860Z" level=info msg="StopPodSandbox for \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\"" Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.270 [WARNING][5311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0", GenerateName:"calico-kube-controllers-6fdfb6d66f-", Namespace:"calico-system", SelfLink:"", UID:"3ca1b676-84c9-4bc8-ae1d-208eeead353b", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdfb6d66f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129", Pod:"calico-kube-controllers-6fdfb6d66f-lczz9", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9836997e06b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.271 [INFO][5311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.271 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" iface="eth0" netns="" Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.271 [INFO][5311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.271 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.358 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" HandleID="k8s-pod-network.0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.359 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.359 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.382 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" HandleID="k8s-pod-network.0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.382 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" HandleID="k8s-pod-network.0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.388 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:03.403526 containerd[1466]: 2025-09-12 17:43:03.390 [INFO][5311] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:43:03.403526 containerd[1466]: time="2025-09-12T17:43:03.401507964Z" level=info msg="TearDown network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\" successfully" Sep 12 17:43:03.403526 containerd[1466]: time="2025-09-12T17:43:03.401545344Z" level=info msg="StopPodSandbox for \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\" returns successfully" Sep 12 17:43:03.405610 containerd[1466]: time="2025-09-12T17:43:03.403956047Z" level=info msg="RemovePodSandbox for \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\"" Sep 12 17:43:03.405610 containerd[1466]: time="2025-09-12T17:43:03.404032577Z" level=info msg="Forcibly stopping sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\"" Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.529 [WARNING][5333] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0", GenerateName:"calico-kube-controllers-6fdfb6d66f-", Namespace:"calico-system", SelfLink:"", UID:"3ca1b676-84c9-4bc8-ae1d-208eeead353b", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdfb6d66f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129", Pod:"calico-kube-controllers-6fdfb6d66f-lczz9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9836997e06b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.529 [INFO][5333] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:43:03.636406 
containerd[1466]: 2025-09-12 17:43:03.530 [INFO][5333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" iface="eth0" netns="" Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.530 [INFO][5333] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.530 [INFO][5333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.616 [INFO][5340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" HandleID="k8s-pod-network.0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.616 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.616 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.627 [WARNING][5340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" HandleID="k8s-pod-network.0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.628 [INFO][5340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" HandleID="k8s-pod-network.0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fdfb6d66f--lczz9-eth0" Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.629 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:03.636406 containerd[1466]: 2025-09-12 17:43:03.632 [INFO][5333] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a" Sep 12 17:43:03.636406 containerd[1466]: time="2025-09-12T17:43:03.634749436Z" level=info msg="TearDown network for sandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\" successfully" Sep 12 17:43:03.711197 containerd[1466]: time="2025-09-12T17:43:03.709552318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:03.711197 containerd[1466]: time="2025-09-12T17:43:03.709717439Z" level=info msg="RemovePodSandbox \"0cc1467326b6dc1f97578129f6e644d949d405bbcdf566a401ddeff8a82ef74a\" returns successfully" Sep 12 17:43:03.715629 containerd[1466]: time="2025-09-12T17:43:03.715578502Z" level=info msg="StopPodSandbox for \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\"" Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.812 [WARNING][5358] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0", GenerateName:"calico-apiserver-fd4c4dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd4c4dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf", Pod:"calico-apiserver-fd4c4dbff-4fwwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.86.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia5ff398eef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.812 [INFO][5358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.812 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" iface="eth0" netns="" Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.812 [INFO][5358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.812 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.910 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" HandleID="k8s-pod-network.0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0" Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.912 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.912 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.932 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" HandleID="k8s-pod-network.0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0" Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.932 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" HandleID="k8s-pod-network.0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0" Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.936 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:03.947889 containerd[1466]: 2025-09-12 17:43:03.941 [INFO][5358] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:43:03.947889 containerd[1466]: time="2025-09-12T17:43:03.947774945Z" level=info msg="TearDown network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\" successfully" Sep 12 17:43:03.947889 containerd[1466]: time="2025-09-12T17:43:03.947812392Z" level=info msg="StopPodSandbox for \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\" returns successfully" Sep 12 17:43:03.948832 containerd[1466]: time="2025-09-12T17:43:03.948564295Z" level=info msg="RemovePodSandbox for \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\"" Sep 12 17:43:03.948832 containerd[1466]: time="2025-09-12T17:43:03.948604503Z" level=info msg="Forcibly stopping sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\"" Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.084 [WARNING][5379] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0", GenerateName:"calico-apiserver-fd4c4dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ba79efb-0e0a-4d76-a5e1-787d8e35bc0d", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd4c4dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"89a9d8e842435cd85ad678d31c352292db1f16f56c9397aa28d0a833c87f1caf", Pod:"calico-apiserver-fd4c4dbff-4fwwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia5ff398eef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.086 [INFO][5379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 
17:43:04.086 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" iface="eth0" netns="" Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.086 [INFO][5379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.086 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.207 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" HandleID="k8s-pod-network.0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0" Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.209 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.210 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.235 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" HandleID="k8s-pod-network.0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0" Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.235 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" HandleID="k8s-pod-network.0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--4fwwl-eth0" Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.241 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:04.258141 containerd[1466]: 2025-09-12 17:43:04.251 [INFO][5379] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5" Sep 12 17:43:04.258141 containerd[1466]: time="2025-09-12T17:43:04.253540663Z" level=info msg="TearDown network for sandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\" successfully" Sep 12 17:43:04.265392 containerd[1466]: time="2025-09-12T17:43:04.265159694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:04.265392 containerd[1466]: time="2025-09-12T17:43:04.265268841Z" level=info msg="RemovePodSandbox \"0f47edf363cf197c4d80b577b4dcf719910cce2c69cb1925ab0d901c2f48d9e5\" returns successfully" Sep 12 17:43:04.267235 containerd[1466]: time="2025-09-12T17:43:04.266821238Z" level=info msg="StopPodSandbox for \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\"" Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.485 [WARNING][5400] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0", GenerateName:"calico-apiserver-fd4c4dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd4c4dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f", Pod:"calico-apiserver-fd4c4dbff-cp27h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.86.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95c680a31f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.487 [INFO][5400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.487 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" iface="eth0" netns="" Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.487 [INFO][5400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.487 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.608 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" HandleID="k8s-pod-network.f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.608 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.608 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.660 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" HandleID="k8s-pod-network.f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.660 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" HandleID="k8s-pod-network.f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.664 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:04.677928 containerd[1466]: 2025-09-12 17:43:04.670 [INFO][5400] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:43:04.684338 containerd[1466]: time="2025-09-12T17:43:04.677988555Z" level=info msg="TearDown network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\" successfully" Sep 12 17:43:04.684338 containerd[1466]: time="2025-09-12T17:43:04.678040524Z" level=info msg="StopPodSandbox for \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\" returns successfully" Sep 12 17:43:04.684338 containerd[1466]: time="2025-09-12T17:43:04.681366112Z" level=info msg="RemovePodSandbox for \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\"" Sep 12 17:43:04.684338 containerd[1466]: time="2025-09-12T17:43:04.681416981Z" level=info msg="Forcibly stopping sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\"" Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:04.850 [WARNING][5421] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0", GenerateName:"calico-apiserver-fd4c4dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"183ac9eb-36cd-4016-8ce1-d2ef0ae1a87d", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd4c4dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"928f5ba88f6405a6696b64177308d7b64b329c56541bd75f93b8d6f97e8c2a5f", Pod:"calico-apiserver-fd4c4dbff-cp27h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95c680a31f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:04.850 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 
17:43:04.850 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" iface="eth0" netns="" Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:04.850 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:04.850 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:04.982 [INFO][5428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" HandleID="k8s-pod-network.f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:04.985 [INFO][5428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:04.985 [INFO][5428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:05.013 [WARNING][5428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" HandleID="k8s-pod-network.f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:05.013 [INFO][5428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" HandleID="k8s-pod-network.f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-calico--apiserver--fd4c4dbff--cp27h-eth0" Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:05.017 [INFO][5428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:05.032254 containerd[1466]: 2025-09-12 17:43:05.023 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf" Sep 12 17:43:05.032254 containerd[1466]: time="2025-09-12T17:43:05.030999108Z" level=info msg="TearDown network for sandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\" successfully" Sep 12 17:43:05.041479 containerd[1466]: time="2025-09-12T17:43:05.040793905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:05.041479 containerd[1466]: time="2025-09-12T17:43:05.040912225Z" level=info msg="RemovePodSandbox \"f49c849dd5bec43d52ae49b7fb785b8ee523ca1d9caeb6deb0b05b815c9d5caf\" returns successfully" Sep 12 17:43:05.043111 containerd[1466]: time="2025-09-12T17:43:05.043034526Z" level=info msg="StopPodSandbox for \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\"" Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.211 [WARNING][5443] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d23bfb6-ec70-4170-8790-d653e47690fd", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b", Pod:"coredns-674b8bbfcf-nf9rd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali43b8d58c086", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.216 [INFO][5443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.216 [INFO][5443] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" iface="eth0" netns="" Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.216 [INFO][5443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.216 [INFO][5443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.366 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" HandleID="k8s-pod-network.6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.366 [INFO][5454] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.367 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.391 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" HandleID="k8s-pod-network.6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.392 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" HandleID="k8s-pod-network.6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.395 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:05.405153 containerd[1466]: 2025-09-12 17:43:05.400 [INFO][5443] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:43:05.409117 containerd[1466]: time="2025-09-12T17:43:05.408192486Z" level=info msg="TearDown network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\" successfully" Sep 12 17:43:05.409117 containerd[1466]: time="2025-09-12T17:43:05.408235919Z" level=info msg="StopPodSandbox for \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\" returns successfully" Sep 12 17:43:05.409117 containerd[1466]: time="2025-09-12T17:43:05.408830459Z" level=info msg="RemovePodSandbox for \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\"" Sep 12 17:43:05.409117 containerd[1466]: time="2025-09-12T17:43:05.408875986Z" level=info msg="Forcibly stopping sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\"" Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.588 [WARNING][5469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d23bfb6-ec70-4170-8790-d653e47690fd", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"20176dadf17217ea2531496dac4419c56fd478b0f8ed5ed9ec5c675ade0b123b", Pod:"coredns-674b8bbfcf-nf9rd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43b8d58c086", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.589 [INFO][5469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.589 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" iface="eth0" netns="" Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.589 [INFO][5469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.589 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.686 [INFO][5476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" HandleID="k8s-pod-network.6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.692 [INFO][5476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.692 [INFO][5476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.709 [WARNING][5476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" HandleID="k8s-pod-network.6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.709 [INFO][5476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" HandleID="k8s-pod-network.6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nf9rd-eth0" Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.713 [INFO][5476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:05.728156 containerd[1466]: 2025-09-12 17:43:05.717 [INFO][5469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995" Sep 12 17:43:05.728156 containerd[1466]: time="2025-09-12T17:43:05.727018506Z" level=info msg="TearDown network for sandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\" successfully" Sep 12 17:43:05.735422 containerd[1466]: time="2025-09-12T17:43:05.735013319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:05.735422 containerd[1466]: time="2025-09-12T17:43:05.735180301Z" level=info msg="RemovePodSandbox \"6aa947c7ff37c1e316c759570f6d60c449b9a2ac24e1ab8b37bed5692b811995\" returns successfully" Sep 12 17:43:05.736474 containerd[1466]: time="2025-09-12T17:43:05.735840106Z" level=info msg="StopPodSandbox for \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\"" Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:05.891 [WARNING][5491] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cf3bd71-c352-4c33-a791-2ad40156deb1", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4", Pod:"coredns-674b8bbfcf-8l2m5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali6261262fd0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:05.891 [INFO][5491] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:05.891 [INFO][5491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" iface="eth0" netns="" Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:05.891 [INFO][5491] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:05.891 [INFO][5491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:05.999 [INFO][5498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" HandleID="k8s-pod-network.2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:05.999 [INFO][5498] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:05.999 [INFO][5498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:06.020 [WARNING][5498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" HandleID="k8s-pod-network.2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:06.020 [INFO][5498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" HandleID="k8s-pod-network.2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:06.024 [INFO][5498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:06.039457 containerd[1466]: 2025-09-12 17:43:06.034 [INFO][5491] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:43:06.042074 containerd[1466]: time="2025-09-12T17:43:06.041138362Z" level=info msg="TearDown network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\" successfully" Sep 12 17:43:06.042074 containerd[1466]: time="2025-09-12T17:43:06.041184448Z" level=info msg="StopPodSandbox for \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\" returns successfully" Sep 12 17:43:06.043772 containerd[1466]: time="2025-09-12T17:43:06.043121193Z" level=info msg="RemovePodSandbox for \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\"" Sep 12 17:43:06.044229 containerd[1466]: time="2025-09-12T17:43:06.044073373Z" level=info msg="Forcibly stopping sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\"" Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.204 [WARNING][5512] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cf3bd71-c352-4c33-a791-2ad40156deb1", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"965bfb6a2781729265b5a78706aff272503487662cdaf989f735134bdfa101c4", Pod:"coredns-674b8bbfcf-8l2m5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6261262fd0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.205 [INFO][5512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.205 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" iface="eth0" netns="" Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.207 [INFO][5512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.207 [INFO][5512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.285 [INFO][5520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" HandleID="k8s-pod-network.2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.285 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.286 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.309 [WARNING][5520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" HandleID="k8s-pod-network.2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.309 [INFO][5520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" HandleID="k8s-pod-network.2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--8l2m5-eth0" Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.323 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:06.336209 containerd[1466]: 2025-09-12 17:43:06.327 [INFO][5512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057" Sep 12 17:43:06.336209 containerd[1466]: time="2025-09-12T17:43:06.334382179Z" level=info msg="TearDown network for sandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\" successfully" Sep 12 17:43:06.346584 containerd[1466]: time="2025-09-12T17:43:06.346535399Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:06.346722 containerd[1466]: time="2025-09-12T17:43:06.346637325Z" level=info msg="RemovePodSandbox \"2f981cfa88f5fd1e498953d25aabd3f97b653a1ded541d37147441ddb7bbc057\" returns successfully" Sep 12 17:43:06.352499 containerd[1466]: time="2025-09-12T17:43:06.352466984Z" level=info msg="StopPodSandbox for \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\"" Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.502 [WARNING][5535] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"07aac094-4524-4cf9-bf29-6a01c3a02371", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc", Pod:"goldmane-54d579b49d-ncq8h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0475a66a71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.502 [INFO][5535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.503 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" iface="eth0" netns="" Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.503 [INFO][5535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.503 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.575 [INFO][5543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" HandleID="k8s-pod-network.5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.576 [INFO][5543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.577 [INFO][5543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.591 [WARNING][5543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" HandleID="k8s-pod-network.5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.591 [INFO][5543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" HandleID="k8s-pod-network.5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.595 [INFO][5543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:06.609639 containerd[1466]: 2025-09-12 17:43:06.599 [INFO][5535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:43:06.612631 containerd[1466]: time="2025-09-12T17:43:06.608824393Z" level=info msg="TearDown network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\" successfully" Sep 12 17:43:06.612631 containerd[1466]: time="2025-09-12T17:43:06.610755404Z" level=info msg="StopPodSandbox for \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\" returns successfully" Sep 12 17:43:06.613209 containerd[1466]: time="2025-09-12T17:43:06.612636471Z" level=info msg="RemovePodSandbox for \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\"" Sep 12 17:43:06.613209 containerd[1466]: time="2025-09-12T17:43:06.612691441Z" level=info msg="Forcibly stopping sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\"" Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.770 [WARNING][5557] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"07aac094-4524-4cf9-bf29-6a01c3a02371", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-160f5498fdb19bb62183.c.flatcar-212911.internal", ContainerID:"3b3d537ce13fbf60d9892cc5c4fd05d74cea2336ed025b50471048d9d8302fbc", Pod:"goldmane-54d579b49d-ncq8h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0475a66a71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.771 [INFO][5557] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.771 [INFO][5557] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" iface="eth0" netns="" Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.771 [INFO][5557] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.771 [INFO][5557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.888 [INFO][5564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" HandleID="k8s-pod-network.5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.890 [INFO][5564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.890 [INFO][5564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.920 [WARNING][5564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" HandleID="k8s-pod-network.5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.920 [INFO][5564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" HandleID="k8s-pod-network.5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-goldmane--54d579b49d--ncq8h-eth0" Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.924 [INFO][5564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:06.938543 containerd[1466]: 2025-09-12 17:43:06.932 [INFO][5557] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654" Sep 12 17:43:06.941721 containerd[1466]: time="2025-09-12T17:43:06.939376921Z" level=info msg="TearDown network for sandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\" successfully" Sep 12 17:43:06.950601 containerd[1466]: time="2025-09-12T17:43:06.950261085Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:06.950601 containerd[1466]: time="2025-09-12T17:43:06.950370477Z" level=info msg="RemovePodSandbox \"5a66f6848e8a5a8f1afd79b88f776f9594dad2c8873c77f6fd73f52dcf0c9654\" returns successfully" Sep 12 17:43:06.953075 containerd[1466]: time="2025-09-12T17:43:06.952743019Z" level=info msg="StopPodSandbox for \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\"" Sep 12 17:43:07.191736 containerd[1466]: time="2025-09-12T17:43:07.190968808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:07.194077 containerd[1466]: time="2025-09-12T17:43:07.193857286Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:07.194500 containerd[1466]: time="2025-09-12T17:43:07.193066722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.077 [WARNING][5578] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.078 [INFO][5578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.078 [INFO][5578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" iface="eth0" netns="" Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.079 [INFO][5578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.079 [INFO][5578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.163 [INFO][5585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" HandleID="k8s-pod-network.0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.163 [INFO][5585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.164 [INFO][5585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.178 [WARNING][5585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" HandleID="k8s-pod-network.0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.178 [INFO][5585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" HandleID="k8s-pod-network.0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.185 [INFO][5585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:07.194500 containerd[1466]: 2025-09-12 17:43:07.188 [INFO][5578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:43:07.194500 containerd[1466]: time="2025-09-12T17:43:07.194375805Z" level=info msg="TearDown network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\" successfully" Sep 12 17:43:07.194500 containerd[1466]: time="2025-09-12T17:43:07.194396681Z" level=info msg="StopPodSandbox for \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\" returns successfully" Sep 12 17:43:07.197209 containerd[1466]: time="2025-09-12T17:43:07.196679405Z" level=info msg="RemovePodSandbox for \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\"" Sep 12 17:43:07.197209 containerd[1466]: time="2025-09-12T17:43:07.196720761Z" level=info msg="Forcibly stopping sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\"" Sep 12 17:43:07.200732 containerd[1466]: time="2025-09-12T17:43:07.200695287Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:07.203184 containerd[1466]: time="2025-09-12T17:43:07.203013299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.963398376s" Sep 12 17:43:07.203184 containerd[1466]: time="2025-09-12T17:43:07.203058649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:43:07.247101 containerd[1466]: time="2025-09-12T17:43:07.246714833Z" level=info msg="CreateContainer within sandbox \"bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:43:07.299926 containerd[1466]: time="2025-09-12T17:43:07.299620425Z" level=info msg="CreateContainer within sandbox \"bac60876bed9b7a0a5d75243c2678f020c589bc6485f567664df81dc058fb129\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7dd9cb3c9a0a3c4baf06ff6570777843d4b1f927fa0601cb368c1aba9185ad80\"" Sep 12 17:43:07.300291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2086406940.mount: Deactivated successfully. 
Sep 12 17:43:07.306573 containerd[1466]: time="2025-09-12T17:43:07.305918083Z" level=info msg="StartContainer for \"7dd9cb3c9a0a3c4baf06ff6570777843d4b1f927fa0601cb368c1aba9185ad80\"" Sep 12 17:43:07.435364 systemd[1]: Started cri-containerd-7dd9cb3c9a0a3c4baf06ff6570777843d4b1f927fa0601cb368c1aba9185ad80.scope - libcontainer container 7dd9cb3c9a0a3c4baf06ff6570777843d4b1f927fa0601cb368c1aba9185ad80. Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.333 [WARNING][5601] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" WorkloadEndpoint="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.335 [INFO][5601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.335 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" iface="eth0" netns="" Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.335 [INFO][5601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.335 [INFO][5601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.409 [INFO][5617] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" HandleID="k8s-pod-network.0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.409 [INFO][5617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.409 [INFO][5617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.428 [WARNING][5617] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" HandleID="k8s-pod-network.0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.428 [INFO][5617] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" HandleID="k8s-pod-network.0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Workload="ci--4081--3--6--160f5498fdb19bb62183.c.flatcar--212911.internal-k8s-whisker--7896b6f685--hz8pp-eth0" Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.433 [INFO][5617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:07.443738 containerd[1466]: 2025-09-12 17:43:07.441 [INFO][5601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1" Sep 12 17:43:07.446941 containerd[1466]: time="2025-09-12T17:43:07.445282610Z" level=info msg="TearDown network for sandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\" successfully" Sep 12 17:43:07.453471 containerd[1466]: time="2025-09-12T17:43:07.453416697Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:07.453824 containerd[1466]: time="2025-09-12T17:43:07.453698750Z" level=info msg="RemovePodSandbox \"0cfee0cd28129618c1713c7b2f6464570a24c1af2e6ec074af81492a36ab0cb1\" returns successfully" Sep 12 17:43:07.610426 containerd[1466]: time="2025-09-12T17:43:07.610304035Z" level=info msg="StartContainer for \"7dd9cb3c9a0a3c4baf06ff6570777843d4b1f927fa0601cb368c1aba9185ad80\" returns successfully" Sep 12 17:43:08.177956 kubelet[2632]: I0912 17:43:08.177879 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fdfb6d66f-lczz9" podStartSLOduration=29.186032443 podStartE2EDuration="45.177857398s" podCreationTimestamp="2025-09-12 17:42:23 +0000 UTC" firstStartedPulling="2025-09-12 17:42:51.213138535 +0000 UTC m=+48.808781636" lastFinishedPulling="2025-09-12 17:43:07.204963496 +0000 UTC m=+64.800606591" observedRunningTime="2025-09-12 17:43:08.175646309 +0000 UTC m=+65.771289427" watchObservedRunningTime="2025-09-12 17:43:08.177857398 +0000 UTC m=+65.773500515" Sep 12 17:43:13.314514 systemd[1]: Started sshd@9-10.128.0.94:22-139.178.89.65:56470.service - OpenSSH per-connection server daemon (139.178.89.65:56470). Sep 12 17:43:13.719208 sshd[5690]: Accepted publickey for core from 139.178.89.65 port 56470 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:13.721587 sshd[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:13.734933 systemd-logind[1445]: New session 10 of user core. Sep 12 17:43:13.739610 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:43:14.131433 sshd[5690]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:14.139203 systemd[1]: sshd@9-10.128.0.94:22-139.178.89.65:56470.service: Deactivated successfully. Sep 12 17:43:14.144901 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:43:14.148040 systemd-logind[1445]: Session 10 logged out. 
Waiting for processes to exit. Sep 12 17:43:14.152518 systemd-logind[1445]: Removed session 10. Sep 12 17:43:19.199203 systemd[1]: Started sshd@10-10.128.0.94:22-139.178.89.65:56478.service - OpenSSH per-connection server daemon (139.178.89.65:56478). Sep 12 17:43:19.566221 sshd[5729]: Accepted publickey for core from 139.178.89.65 port 56478 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:19.568477 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:19.575516 systemd-logind[1445]: New session 11 of user core. Sep 12 17:43:19.581327 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:43:20.006390 sshd[5729]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:20.016655 systemd[1]: sshd@10-10.128.0.94:22-139.178.89.65:56478.service: Deactivated successfully. Sep 12 17:43:20.022082 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:43:20.025410 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:43:20.028439 systemd-logind[1445]: Removed session 11. Sep 12 17:43:23.489166 kubelet[2632]: I0912 17:43:23.489115 2632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:43:25.079475 systemd[1]: Started sshd@11-10.128.0.94:22-139.178.89.65:37940.service - OpenSSH per-connection server daemon (139.178.89.65:37940). Sep 12 17:43:25.484104 sshd[5752]: Accepted publickey for core from 139.178.89.65 port 37940 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:25.488926 sshd[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:25.497411 systemd-logind[1445]: New session 12 of user core. Sep 12 17:43:25.504819 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 17:43:25.926422 sshd[5752]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:25.937637 systemd[1]: sshd@11-10.128.0.94:22-139.178.89.65:37940.service: Deactivated successfully. Sep 12 17:43:25.943285 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:43:25.947297 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:43:25.950188 systemd-logind[1445]: Removed session 12. Sep 12 17:43:26.003447 systemd[1]: Started sshd@12-10.128.0.94:22-139.178.89.65:37946.service - OpenSSH per-connection server daemon (139.178.89.65:37946). Sep 12 17:43:26.389823 sshd[5766]: Accepted publickey for core from 139.178.89.65 port 37946 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:26.391653 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:26.398411 systemd-logind[1445]: New session 13 of user core. Sep 12 17:43:26.405281 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:43:26.816657 sshd[5766]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:26.821397 systemd[1]: sshd@12-10.128.0.94:22-139.178.89.65:37946.service: Deactivated successfully. Sep 12 17:43:26.825128 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:43:26.827177 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:43:26.828890 systemd-logind[1445]: Removed session 13. Sep 12 17:43:26.898318 systemd[1]: Started sshd@13-10.128.0.94:22-139.178.89.65:37956.service - OpenSSH per-connection server daemon (139.178.89.65:37956). Sep 12 17:43:27.293741 sshd[5777]: Accepted publickey for core from 139.178.89.65 port 37956 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:27.295616 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:27.306395 systemd-logind[1445]: New session 14 of user core. 
Sep 12 17:43:27.314625 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:43:27.684512 sshd[5777]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:27.694983 systemd[1]: sshd@13-10.128.0.94:22-139.178.89.65:37956.service: Deactivated successfully. Sep 12 17:43:27.700307 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:43:27.702425 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:43:27.705307 systemd-logind[1445]: Removed session 14. Sep 12 17:43:32.764233 systemd[1]: Started sshd@14-10.128.0.94:22-139.178.89.65:50328.service - OpenSSH per-connection server daemon (139.178.89.65:50328). Sep 12 17:43:33.158076 sshd[5812]: Accepted publickey for core from 139.178.89.65 port 50328 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:33.162368 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:33.175312 systemd-logind[1445]: New session 15 of user core. Sep 12 17:43:33.183330 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:43:33.568391 sshd[5812]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:33.577222 systemd[1]: sshd@14-10.128.0.94:22-139.178.89.65:50328.service: Deactivated successfully. Sep 12 17:43:33.577680 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:43:33.584472 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:43:33.592306 systemd-logind[1445]: Removed session 15. Sep 12 17:43:38.206906 systemd[1]: run-containerd-runc-k8s.io-7dd9cb3c9a0a3c4baf06ff6570777843d4b1f927fa0601cb368c1aba9185ad80-runc.g5tmOd.mount: Deactivated successfully. Sep 12 17:43:38.645536 systemd[1]: Started sshd@15-10.128.0.94:22-139.178.89.65:50344.service - OpenSSH per-connection server daemon (139.178.89.65:50344). 
Sep 12 17:43:39.045220 sshd[5852]: Accepted publickey for core from 139.178.89.65 port 50344 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:39.047403 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:39.058284 systemd-logind[1445]: New session 16 of user core. Sep 12 17:43:39.066322 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:43:39.449484 sshd[5852]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:39.457943 systemd[1]: sshd@15-10.128.0.94:22-139.178.89.65:50344.service: Deactivated successfully. Sep 12 17:43:39.458197 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:43:39.464239 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:43:39.468274 systemd-logind[1445]: Removed session 16. Sep 12 17:43:44.528227 systemd[1]: Started sshd@16-10.128.0.94:22-139.178.89.65:51860.service - OpenSSH per-connection server daemon (139.178.89.65:51860). Sep 12 17:43:44.930660 sshd[5886]: Accepted publickey for core from 139.178.89.65 port 51860 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:44.933122 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:44.944900 systemd-logind[1445]: New session 17 of user core. Sep 12 17:43:44.949622 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:43:45.373459 sshd[5886]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:45.381886 systemd[1]: sshd@16-10.128.0.94:22-139.178.89.65:51860.service: Deactivated successfully. Sep 12 17:43:45.385503 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:43:45.389083 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:43:45.391886 systemd-logind[1445]: Removed session 17. 
Sep 12 17:43:50.441223 systemd[1]: Started sshd@17-10.128.0.94:22-139.178.89.65:58918.service - OpenSSH per-connection server daemon (139.178.89.65:58918). Sep 12 17:43:50.818295 sshd[5921]: Accepted publickey for core from 139.178.89.65 port 58918 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:50.819209 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:50.831137 systemd-logind[1445]: New session 18 of user core. Sep 12 17:43:50.839290 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:43:51.196462 sshd[5921]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:51.203228 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:43:51.203762 systemd[1]: sshd@17-10.128.0.94:22-139.178.89.65:58918.service: Deactivated successfully. Sep 12 17:43:51.210281 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:43:51.216768 systemd-logind[1445]: Removed session 18. Sep 12 17:43:51.269267 systemd[1]: Started sshd@18-10.128.0.94:22-139.178.89.65:58922.service - OpenSSH per-connection server daemon (139.178.89.65:58922). Sep 12 17:43:51.646202 sshd[5934]: Accepted publickey for core from 139.178.89.65 port 58922 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:51.649055 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:51.662167 systemd-logind[1445]: New session 19 of user core. Sep 12 17:43:51.665303 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:43:52.083481 sshd[5934]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:52.090969 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:43:52.092241 systemd[1]: sshd@18-10.128.0.94:22-139.178.89.65:58922.service: Deactivated successfully. Sep 12 17:43:52.097628 systemd[1]: session-19.scope: Deactivated successfully. 
Sep 12 17:43:52.101966 systemd-logind[1445]: Removed session 19. Sep 12 17:43:52.161505 systemd[1]: Started sshd@19-10.128.0.94:22-139.178.89.65:58924.service - OpenSSH per-connection server daemon (139.178.89.65:58924). Sep 12 17:43:52.557655 sshd[5945]: Accepted publickey for core from 139.178.89.65 port 58924 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:52.558762 sshd[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:52.569717 systemd-logind[1445]: New session 20 of user core. Sep 12 17:43:52.579296 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:43:53.811715 sshd[5945]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:53.821740 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:43:53.824207 systemd[1]: sshd@19-10.128.0.94:22-139.178.89.65:58924.service: Deactivated successfully. Sep 12 17:43:53.829560 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:43:53.832198 systemd-logind[1445]: Removed session 20. Sep 12 17:43:53.878853 systemd[1]: Started sshd@20-10.128.0.94:22-139.178.89.65:58940.service - OpenSSH per-connection server daemon (139.178.89.65:58940). Sep 12 17:43:54.274208 sshd[5963]: Accepted publickey for core from 139.178.89.65 port 58940 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:54.275930 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:54.287408 systemd-logind[1445]: New session 21 of user core. Sep 12 17:43:54.292813 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:43:54.942203 sshd[5963]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:54.949502 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:43:54.950551 systemd[1]: sshd@20-10.128.0.94:22-139.178.89.65:58940.service: Deactivated successfully. 
Sep 12 17:43:54.956563 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:43:54.961313 systemd-logind[1445]: Removed session 21. Sep 12 17:43:55.026258 systemd[1]: Started sshd@21-10.128.0.94:22-139.178.89.65:58942.service - OpenSSH per-connection server daemon (139.178.89.65:58942). Sep 12 17:43:55.416553 sshd[5976]: Accepted publickey for core from 139.178.89.65 port 58942 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:43:55.420206 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:55.433181 systemd-logind[1445]: New session 22 of user core. Sep 12 17:43:55.437359 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:43:55.813412 sshd[5976]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:55.820776 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:43:55.822153 systemd[1]: sshd@21-10.128.0.94:22-139.178.89.65:58942.service: Deactivated successfully. Sep 12 17:43:55.827630 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:43:55.832866 systemd-logind[1445]: Removed session 22. Sep 12 17:44:00.891229 systemd[1]: Started sshd@22-10.128.0.94:22-139.178.89.65:38296.service - OpenSSH per-connection server daemon (139.178.89.65:38296). Sep 12 17:44:01.289515 sshd[6013]: Accepted publickey for core from 139.178.89.65 port 38296 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:01.293689 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:01.303674 systemd-logind[1445]: New session 23 of user core. Sep 12 17:44:01.308318 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:44:01.686650 sshd[6013]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:01.693910 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. 
Sep 12 17:44:01.696123 systemd[1]: sshd@22-10.128.0.94:22-139.178.89.65:38296.service: Deactivated successfully. Sep 12 17:44:01.703570 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:44:01.708423 systemd-logind[1445]: Removed session 23. Sep 12 17:44:02.240490 systemd[1]: run-containerd-runc-k8s.io-7dd9cb3c9a0a3c4baf06ff6570777843d4b1f927fa0601cb368c1aba9185ad80-runc.4NkpTJ.mount: Deactivated successfully. Sep 12 17:44:06.771229 systemd[1]: Started sshd@23-10.128.0.94:22-139.178.89.65:38312.service - OpenSSH per-connection server daemon (139.178.89.65:38312). Sep 12 17:44:07.172232 sshd[6052]: Accepted publickey for core from 139.178.89.65 port 38312 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:07.174718 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:07.185684 systemd-logind[1445]: New session 24 of user core. Sep 12 17:44:07.189327 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:44:07.639441 sshd[6052]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:07.648530 systemd[1]: sshd@23-10.128.0.94:22-139.178.89.65:38312.service: Deactivated successfully. Sep 12 17:44:07.652774 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:44:07.656482 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:44:07.658449 systemd-logind[1445]: Removed session 24.