Sep 12 17:32:43.145129 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:32:43.145197 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:32:43.145217 kernel: BIOS-provided physical RAM map:
Sep 12 17:32:43.145232 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 12 17:32:43.145246 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 12 17:32:43.145260 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 12 17:32:43.145278 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 12 17:32:43.145297 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 12 17:32:43.145312 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Sep 12 17:32:43.145327 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Sep 12 17:32:43.145342 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Sep 12 17:32:43.145357 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Sep 12 17:32:43.145372 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 12 17:32:43.145387 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 12 17:32:43.145411 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 12 17:32:43.145428 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 12 17:32:43.145444 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 12 17:32:43.145461 kernel: NX (Execute Disable) protection: active
Sep 12 17:32:43.145478 kernel: APIC: Static calls initialized
Sep 12 17:32:43.145495 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:32:43.145512 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Sep 12 17:32:43.145528 kernel: SMBIOS 2.4 present.
Sep 12 17:32:43.145545 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 12 17:32:43.145562 kernel: Hypervisor detected: KVM
Sep 12 17:32:43.145583 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:32:43.145599 kernel: kvm-clock: using sched offset of 13408503976 cycles
Sep 12 17:32:43.145617 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:32:43.145634 kernel: tsc: Detected 2299.998 MHz processor
Sep 12 17:32:43.145651 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:32:43.145668 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:32:43.145686 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 12 17:32:43.145703 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 12 17:32:43.145720 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:32:43.145742 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 12 17:32:43.145759 kernel: Using GB pages for direct mapping
Sep 12 17:32:43.145775 kernel: Secure boot disabled
Sep 12 17:32:43.145802 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:32:43.145819 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 12 17:32:43.145837 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 12 17:32:43.145854 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 12 17:32:43.145885 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 12 17:32:43.145908 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 12 17:32:43.145926 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 12 17:32:43.145945 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 12 17:32:43.145963 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 12 17:32:43.145981 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 12 17:32:43.146000 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 12 17:32:43.146031 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 12 17:32:43.146049 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 12 17:32:43.146067 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 12 17:32:43.146085 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 12 17:32:43.146103 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 12 17:32:43.146121 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 12 17:32:43.148252 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 12 17:32:43.148275 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 12 17:32:43.148293 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 12 17:32:43.148316 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 12 17:32:43.148332 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:32:43.148349 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:32:43.148366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 17:32:43.148383 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 12 17:32:43.148399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 12 17:32:43.148416 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Sep 12 17:32:43.148434 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Sep 12 17:32:43.148451 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Sep 12 17:32:43.148473 kernel: Zone ranges:
Sep 12 17:32:43.148490 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:32:43.148508 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 12 17:32:43.148527 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 12 17:32:43.148545 kernel: Movable zone start for each node
Sep 12 17:32:43.148564 kernel: Early memory node ranges
Sep 12 17:32:43.148581 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 12 17:32:43.148601 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 12 17:32:43.148618 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Sep 12 17:32:43.148642 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 12 17:32:43.148659 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 12 17:32:43.148677 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 12 17:32:43.148695 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:32:43.148712 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 12 17:32:43.148731 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 12 17:32:43.148750 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 17:32:43.148767 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 12 17:32:43.148786 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 17:32:43.148808 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:32:43.148827 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:32:43.148846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:32:43.148872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:32:43.148891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:32:43.148909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:32:43.148928 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:32:43.148944 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:32:43.148959 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 12 17:32:43.148980 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:32:43.148998 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:32:43.149014 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:32:43.149031 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:32:43.149049 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:32:43.149065 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:32:43.149083 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:32:43.149101 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:32:43.149121 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:32:43.149201 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:32:43.149217 kernel: random: crng init done
Sep 12 17:32:43.149233 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 12 17:32:43.149250 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:32:43.149269 kernel: Fallback order for Node 0: 0
Sep 12 17:32:43.149287 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Sep 12 17:32:43.149305 kernel: Policy zone: Normal
Sep 12 17:32:43.149322 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:32:43.149345 kernel: software IO TLB: area num 2.
Sep 12 17:32:43.149362 kernel: Memory: 7513392K/7860584K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 346932K reserved, 0K cma-reserved)
Sep 12 17:32:43.149380 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:32:43.149398 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:32:43.149417 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:32:43.149435 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:32:43.149454 kernel: Dynamic Preempt: voluntary
Sep 12 17:32:43.149472 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:32:43.149492 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:32:43.149528 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:32:43.149548 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:32:43.149568 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:32:43.149592 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:32:43.149612 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:32:43.149630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:32:43.149649 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:32:43.149678 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:32:43.149698 kernel: Console: colour dummy device 80x25
Sep 12 17:32:43.149722 kernel: printk: console [ttyS0] enabled
Sep 12 17:32:43.149742 kernel: ACPI: Core revision 20230628
Sep 12 17:32:43.149762 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:32:43.149782 kernel: x2apic enabled
Sep 12 17:32:43.149802 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:32:43.149822 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 12 17:32:43.149840 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 12 17:32:43.149860 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 12 17:32:43.149892 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 12 17:32:43.149912 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 12 17:32:43.149932 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:32:43.149952 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 12 17:32:43.149972 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 12 17:32:43.149992 kernel: Spectre V2 : Mitigation: IBRS
Sep 12 17:32:43.150012 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:32:43.150031 kernel: RETBleed: Mitigation: IBRS
Sep 12 17:32:43.150051 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:32:43.150081 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 12 17:32:43.150100 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:32:43.150120 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 17:32:43.150383 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:32:43.150406 kernel: active return thunk: its_return_thunk
Sep 12 17:32:43.150426 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:32:43.150447 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:32:43.150466 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:32:43.150616 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:32:43.150642 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:32:43.150659 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 17:32:43.150679 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:32:43.150700 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:32:43.150719 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:32:43.150739 kernel: landlock: Up and running.
Sep 12 17:32:43.150892 kernel: SELinux: Initializing.
Sep 12 17:32:43.150914 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:32:43.150934 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:32:43.150959 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 12 17:32:43.150979 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:32:43.151128 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:32:43.151199 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:32:43.151347 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 12 17:32:43.151366 kernel: signal: max sigframe size: 1776
Sep 12 17:32:43.151386 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:32:43.151407 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:32:43.151551 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:32:43.151576 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:32:43.151596 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:32:43.151616 kernel: .... node #0, CPUs: #1
Sep 12 17:32:43.151637 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 17:32:43.151669 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:32:43.151689 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:32:43.151708 kernel: smpboot: Max logical packages: 1
Sep 12 17:32:43.151728 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 12 17:32:43.151753 kernel: devtmpfs: initialized
Sep 12 17:32:43.151773 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:32:43.151793 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 12 17:32:43.151812 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:32:43.151833 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:32:43.151853 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:32:43.151879 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:32:43.151898 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:32:43.151918 kernel: audit: type=2000 audit(1757698361.764:1): state=initialized audit_enabled=0 res=1
Sep 12 17:32:43.151941 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:32:43.151961 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:32:43.151981 kernel: cpuidle: using governor menu
Sep 12 17:32:43.152001 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:32:43.152020 kernel: dca service started, version 1.12.1
Sep 12 17:32:43.152040 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:32:43.152060 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:32:43.152080 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:32:43.152100 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:32:43.152123 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:32:43.152165 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:32:43.152184 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:32:43.152204 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:32:43.152223 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:32:43.152242 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 17:32:43.152263 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:32:43.152283 kernel: ACPI: Interpreter enabled
Sep 12 17:32:43.152303 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 17:32:43.152327 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:32:43.152347 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:32:43.152368 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 17:32:43.152388 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 12 17:32:43.152408 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:32:43.152681 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:32:43.152893 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:32:43.153117 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:32:43.153178 kernel: PCI host bridge to bus 0000:00
Sep 12 17:32:43.153374 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:32:43.153546 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:32:43.153713 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:32:43.153896 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 12 17:32:43.154061 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:32:43.154552 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:32:43.154777 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Sep 12 17:32:43.154982 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 12 17:32:43.155197 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 17:32:43.155398 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Sep 12 17:32:43.155597 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Sep 12 17:32:43.155803 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Sep 12 17:32:43.156004 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:32:43.156226 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Sep 12 17:32:43.156412 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Sep 12 17:32:43.156606 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:32:43.156792 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Sep 12 17:32:43.156987 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Sep 12 17:32:43.157017 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:32:43.157037 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:32:43.157056 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:32:43.157075 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:32:43.157094 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:32:43.157113 kernel: iommu: Default domain type: Translated
Sep 12 17:32:43.157132 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:32:43.157176 kernel: efivars: Registered efivars operations
Sep 12 17:32:43.157195 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:32:43.157219 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:32:43.157238 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 12 17:32:43.157257 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 12 17:32:43.157275 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 12 17:32:43.157294 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 12 17:32:43.157313 kernel: vgaarb: loaded
Sep 12 17:32:43.157332 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:32:43.157351 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:32:43.157370 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:32:43.157393 kernel: pnp: PnP ACPI init
Sep 12 17:32:43.157412 kernel: pnp: PnP ACPI: found 7 devices
Sep 12 17:32:43.157431 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:32:43.157451 kernel: NET: Registered PF_INET protocol family
Sep 12 17:32:43.157470 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:32:43.157489 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 12 17:32:43.157508 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:32:43.157527 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:32:43.157551 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 12 17:32:43.159208 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 12 17:32:43.159230 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:32:43.159246 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:32:43.159263 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:32:43.159281 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:32:43.159490 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:32:43.159666 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:32:43.159914 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:32:43.160101 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 12 17:32:43.160313 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:32:43.160339 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:32:43.160360 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 17:32:43.160379 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 12 17:32:43.160399 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:32:43.160419 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 12 17:32:43.160438 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:32:43.160463 kernel: Initialise system trusted keyrings
Sep 12 17:32:43.160482 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 12 17:32:43.160501 kernel: Key type asymmetric registered
Sep 12 17:32:43.160520 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:32:43.160539 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:32:43.160559 kernel: io scheduler mq-deadline registered
Sep 12 17:32:43.160578 kernel: io scheduler kyber registered
Sep 12 17:32:43.160597 kernel: io scheduler bfq registered
Sep 12 17:32:43.160616 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:32:43.160639 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 12 17:32:43.160882 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 12 17:32:43.160908 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 12 17:32:43.161094 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 12 17:32:43.161119 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 12 17:32:43.162523 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 12 17:32:43.162553 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:32:43.162702 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:32:43.162722 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 12 17:32:43.162748 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 12 17:32:43.162768 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 12 17:32:43.165726 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 12 17:32:43.165762 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:32:43.165783 kernel: i8042: Warning: Keylock active
Sep 12 17:32:43.165802 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:32:43.165821 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:32:43.166022 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 17:32:43.166243 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 17:32:43.166414 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:32:42 UTC (1757698362)
Sep 12 17:32:43.166586 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 17:32:43.166610 kernel: intel_pstate: CPU model not supported
Sep 12 17:32:43.166628 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:32:43.166647 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:32:43.166666 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:32:43.166685 kernel: Segment Routing with IPv6
Sep 12 17:32:43.166710 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:32:43.166728 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:32:43.166757 kernel: Key type dns_resolver registered
Sep 12 17:32:43.166776 kernel: IPI shorthand broadcast: enabled
Sep 12 17:32:43.166794 kernel: sched_clock: Marking stable (935004781, 200996125)->(1244551583, -108550677)
Sep 12 17:32:43.166812 kernel: registered taskstats version 1
Sep 12 17:32:43.166843 kernel: Loading compiled-in X.509 certificates
Sep 12 17:32:43.166870 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:32:43.166889 kernel: Key type .fscrypt registered
Sep 12 17:32:43.166917 kernel: Key type fscrypt-provisioning registered
Sep 12 17:32:43.166935 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:32:43.166953 kernel: ima: No architecture policies found
Sep 12 17:32:43.166971 kernel: clk: Disabling unused clocks
Sep 12 17:32:43.166990 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:32:43.167010 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:32:43.167031 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:32:43.167050 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:32:43.167075 kernel: Run /init as init process
Sep 12 17:32:43.167095 kernel: with arguments:
Sep 12 17:32:43.167114 kernel: /init
Sep 12 17:32:43.167133 kernel: with environment:
Sep 12 17:32:43.170197 kernel: HOME=/
Sep 12 17:32:43.170218 kernel: TERM=linux
Sep 12 17:32:43.170240 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:32:43.170270 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:32:43.170302 systemd[1]: Detected virtualization google.
Sep 12 17:32:43.170323 systemd[1]: Detected architecture x86-64.
Sep 12 17:32:43.170343 systemd[1]: Running in initrd.
Sep 12 17:32:43.170363 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:32:43.170383 systemd[1]: Hostname set to .
Sep 12 17:32:43.170403 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:32:43.170423 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:32:43.170444 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:32:43.170468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:32:43.170490 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:32:43.170511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:32:43.170531 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:32:43.170552 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:32:43.170576 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:32:43.170597 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:32:43.170619 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:32:43.170641 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:32:43.170682 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:32:43.170714 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:32:43.170736 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:32:43.170757 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:32:43.170783 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:32:43.170804 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:32:43.170826 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:32:43.170848 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:32:43.170876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:32:43.170898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:32:43.170919 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:32:43.170941 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:32:43.170962 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:32:43.170987 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:32:43.171009 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:32:43.171030 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:32:43.171052 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:32:43.171073 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:32:43.171095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:32:43.171169 systemd-journald[183]: Collecting audit messages is disabled.
Sep 12 17:32:43.171221 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:32:43.171242 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:32:43.171264 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:32:43.171298 systemd-journald[183]: Journal started
Sep 12 17:32:43.171341 systemd-journald[183]: Runtime Journal (/run/log/journal/dc393a6da73741bc92333e141355717a) is 8.0M, max 148.7M, 140.7M free.
Sep 12 17:32:43.163348 systemd-modules-load[184]: Inserted module 'overlay'
Sep 12 17:32:43.182393 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:32:43.186617 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:32:43.199646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:32:43.209538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:32:43.223351 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:32:43.223394 kernel: Bridge firewalling registered
Sep 12 17:32:43.218341 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 12 17:32:43.220417 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:32:43.224234 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:32:43.241376 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:32:43.263430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:32:43.265863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:32:43.266737 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:32:43.293132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:32:43.297933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:32:43.303620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:32:43.308358 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:32:43.323377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:32:43.340183 dracut-cmdline[215]: dracut-dracut-053
Sep 12 17:32:43.345122 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:32:43.386435 systemd-resolved[218]: Positive Trust Anchors:
Sep 12 17:32:43.386463 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:32:43.386538 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:32:43.391857 systemd-resolved[218]: Defaulting to hostname 'linux'.
Sep 12 17:32:43.393716 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:32:43.397702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
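The dracut-cmdline entry above shows the effective kernel command line, with rootflags=rw and mount.usrflags=ro appearing twice (dracut's prepended defaults plus the BOOT_IMAGE line). The duplication is harmless under a simple last-occurrence-wins key/value reading; a minimal sketch of that reading (our illustration, not dracut's actual parser):

```python
# Illustrative sketch only, not dracut's parser: split a kernel command
# line into parameters. Later duplicates overwrite earlier ones, so the
# repeated rootflags=rw / mount.usrflags=ro entries are benign.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None  # bare flags map to None
    return params

# Abbreviated version of the command line from the log above.
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT "
           "console=ttyS0,115200n8 rootflags=rw mount.usrflags=ro")
params = parse_cmdline(cmdline)
```

A real parser also has to honor quoting and parameters that may legitimately repeat (e.g. console=), which this sketch ignores.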
Sep 12 17:32:43.465189 kernel: SCSI subsystem initialized
Sep 12 17:32:43.477188 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:32:43.490172 kernel: iscsi: registered transport (tcp)
Sep 12 17:32:43.516369 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:32:43.516458 kernel: QLogic iSCSI HBA Driver
Sep 12 17:32:43.570813 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:32:43.577398 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:32:43.622934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:32:43.623026 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:32:43.623062 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:32:43.670192 kernel: raid6: avx2x4 gen() 17816 MB/s
Sep 12 17:32:43.687186 kernel: raid6: avx2x2 gen() 17516 MB/s
Sep 12 17:32:43.713468 kernel: raid6: avx2x1 gen() 13390 MB/s
Sep 12 17:32:43.713553 kernel: raid6: using algorithm avx2x4 gen() 17816 MB/s
Sep 12 17:32:43.740407 kernel: raid6: .... xor() 7896 MB/s, rmw enabled
Sep 12 17:32:43.740492 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:32:43.770187 kernel: xor: automatically using best checksumming function avx
Sep 12 17:32:43.951185 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:32:43.965916 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:32:43.983391 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:32:44.034445 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Sep 12 17:32:44.041774 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:32:44.077502 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
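The raid6 lines above are the kernel benchmarking each available gen() implementation at boot and keeping the fastest (a separate benchmark picks the recovery algorithm, here avx2x2). The selection amounts to an argmax over the measured throughputs, sketched below with the MB/s figures from this log:

```python
# Illustrative sketch of the raid6 gen() selection seen in the log:
# benchmark each implementation, keep the fastest. Throughput figures
# are copied from the boot messages above.
benchmarks = {"avx2x4": 17816, "avx2x2": 17516, "avx2x1": 13390}  # MB/s
best = max(benchmarks, key=benchmarks.get)
```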
Sep 12 17:32:44.097341 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Sep 12 17:32:44.135719 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:32:44.142399 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:32:44.265999 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:32:44.302538 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:32:44.355348 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:32:44.384166 kernel: scsi host0: Virtio SCSI HBA
Sep 12 17:32:44.389760 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:32:44.409302 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Sep 12 17:32:44.421352 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:32:44.451302 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:32:44.441766 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:32:44.490984 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:32:44.491068 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Sep 12 17:32:44.491452 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Sep 12 17:32:44.491692 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:32:44.505439 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:32:44.593725 kernel: sd 0:0:1:0: [sda] Write Protect is off
Sep 12 17:32:44.594094 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Sep 12 17:32:44.594346 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 12 17:32:44.594582 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:32:44.594608 kernel: GPT:17805311 != 25165823
Sep 12 17:32:44.594631 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:32:44.594663 kernel: GPT:17805311 != 25165823
Sep 12 17:32:44.594687 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:32:44.566022 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:32:44.621345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:32:44.621388 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Sep 12 17:32:44.566252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:32:44.644253 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:32:44.655915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:32:44.711328 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (453)
Sep 12 17:32:44.711375 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (451)
Sep 12 17:32:44.656309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:32:44.701060 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:32:44.727598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:32:44.762122 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:32:44.783488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:32:44.803771 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Sep 12 17:32:44.833964 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Sep 12 17:32:44.839796 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
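The "GPT:17805311 != 25165823" warnings are the kernel noticing that the backup GPT header is not on the disk's last LBA. The arithmetic, using the sizes printed in this log, suggests the image's GPT was written for a smaller disk that was later grown (that inference is ours; the benign warning disappears once disk-uuid.service rewrites the headers later in the boot):

```python
# Rough arithmetic behind the GPT warning. The backup GPT header is
# expected on the last LBA of the disk; here it sits where the end of
# a smaller original disk image used to be. Figures from the log.
SECTOR = 512
total_sectors = 25165824            # sd 0:0:1:0: [sda] 25165824 512-byte logical blocks
expected_alt_lba = total_sectors - 1
found_alt_lba = 17805311            # LBA where the backup header actually is
# Implied size of the disk the GPT was originally written for (~9.1 GB):
original_disk_bytes = (found_alt_lba + 1) * SECTOR
```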
Sep 12 17:32:44.865368 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Sep 12 17:32:44.894295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 12 17:32:44.924460 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:32:44.941932 disk-uuid[542]: Primary Header is updated.
Sep 12 17:32:44.941932 disk-uuid[542]: Secondary Entries is updated.
Sep 12 17:32:44.941932 disk-uuid[542]: Secondary Header is updated.
Sep 12 17:32:44.988888 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:32:44.988944 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:32:44.954417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:32:45.033494 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:32:46.015548 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:32:46.015637 disk-uuid[543]: The operation has completed successfully.
Sep 12 17:32:46.097703 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:32:46.097874 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:32:46.122513 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:32:46.159017 sh[569]: Success
Sep 12 17:32:46.182214 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 17:32:46.282763 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:32:46.290129 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:32:46.315854 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:32:46.369741 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:32:46.369838 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:32:46.369865 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:32:46.386265 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:32:46.386363 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:32:46.430195 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:32:46.439602 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:32:46.440606 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:32:46.446478 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:32:46.460596 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:32:46.516226 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:46.516319 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:32:46.532405 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:32:46.553113 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:32:46.553220 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:32:46.571869 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:32:46.589368 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:46.596897 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:32:46.613493 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:32:46.666505 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:32:46.701485 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:32:46.733369 systemd-networkd[751]: lo: Link UP
Sep 12 17:32:46.733382 systemd-networkd[751]: lo: Gained carrier
Sep 12 17:32:46.735667 systemd-networkd[751]: Enumeration completed
Sep 12 17:32:46.736590 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:32:46.736845 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:32:46.736852 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:32:46.741740 systemd-networkd[751]: eth0: Link UP
Sep 12 17:32:46.741748 systemd-networkd[751]: eth0: Gained carrier
Sep 12 17:32:46.741763 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:32:46.747857 systemd[1]: Reached target network.target - Network.
Sep 12 17:32:46.847843 ignition[698]: Ignition 2.19.0
Sep 12 17:32:46.761468 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 12 17:32:46.847854 ignition[698]: Stage: fetch-offline
Sep 12 17:32:46.850600 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:32:46.847913 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:46.857418 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:32:46.847927 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:32:46.904583 unknown[761]: fetched base config from "system"
Sep 12 17:32:46.848218 ignition[698]: parsed url from cmdline: ""
Sep 12 17:32:46.904595 unknown[761]: fetched base config from "system"
Sep 12 17:32:46.848224 ignition[698]: no config URL provided
Sep 12 17:32:46.904607 unknown[761]: fetched user config from "gcp"
Sep 12 17:32:46.848232 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:32:46.907223 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:32:46.848246 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:32:46.931490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:32:46.848257 ignition[698]: failed to fetch config: resource requires networking
Sep 12 17:32:46.993512 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:32:46.848627 ignition[698]: Ignition finished successfully
Sep 12 17:32:47.019486 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:32:46.895535 ignition[761]: Ignition 2.19.0
Sep 12 17:32:47.061992 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:32:46.895546 ignition[761]: Stage: fetch
Sep 12 17:32:47.080317 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:32:46.895827 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:47.086554 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:32:46.895845 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:32:47.117393 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:32:46.896001 ignition[761]: parsed url from cmdline: ""
Sep 12 17:32:47.134504 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:32:46.896008 ignition[761]: no config URL provided
Sep 12 17:32:47.152515 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:32:46.896016 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:32:47.164553 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:32:46.896030 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:32:46.896060 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Sep 12 17:32:46.899654 ignition[761]: GET result: OK
Sep 12 17:32:46.899914 ignition[761]: parsing config with SHA512: 33b6da2ddc79394bfbf121402fbc9d9c36a8fa9efa3eb00888dbc36b841da27e46a6e9c19d1009423b4ab59e8cd43cd541760cf1efc33fa41e9222f67072f250
Sep 12 17:32:46.905123 ignition[761]: fetch: fetch complete
Sep 12 17:32:46.905131 ignition[761]: fetch: fetch passed
Sep 12 17:32:46.905248 ignition[761]: Ignition finished successfully
Sep 12 17:32:46.989926 ignition[768]: Ignition 2.19.0
Sep 12 17:32:46.989939 ignition[768]: Stage: kargs
Sep 12 17:32:46.990245 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:46.990267 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:32:46.991948 ignition[768]: kargs: kargs passed
Sep 12 17:32:46.992026 ignition[768]: Ignition finished successfully
Sep 12 17:32:47.059215 ignition[774]: Ignition 2.19.0
Sep 12 17:32:47.059224 ignition[774]: Stage: disks
Sep 12 17:32:47.059444 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:47.059457 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:32:47.060707 ignition[774]: disks: disks passed
Sep 12 17:32:47.060772 ignition[774]: Ignition finished successfully
Sep 12 17:32:47.236752 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 12 17:32:47.374439 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:32:47.405401 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:32:47.532193 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:32:47.533778 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:32:47.534734 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:32:47.556461 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:32:47.593351 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:32:47.667422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790)
Sep 12 17:32:47.667480 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:47.667505 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:32:47.667521 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:32:47.667536 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:32:47.667550 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:32:47.633965 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:32:47.634049 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:32:47.634102 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:32:47.649707 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:32:47.677183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:32:47.706422 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:32:47.877588 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:32:47.888496 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:32:47.898764 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:32:47.909790 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:32:48.086671 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:32:48.093325 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:32:48.133796 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:32:48.163214 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:48.146460 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:32:48.192998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:32:48.206058 ignition[902]: INFO : Ignition 2.19.0
Sep 12 17:32:48.206058 ignition[902]: INFO : Stage: mount
Sep 12 17:32:48.235378 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:48.235378 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:32:48.235378 ignition[902]: INFO : mount: mount passed
Sep 12 17:32:48.235378 ignition[902]: INFO : Ignition finished successfully
Sep 12 17:32:48.213824 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:32:48.227468 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:32:48.371428 systemd-networkd[751]: eth0: Gained IPv6LL
Sep 12 17:32:48.541494 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:32:48.592195 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914)
Sep 12 17:32:48.612183 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:48.612282 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:32:48.612309 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:32:48.635606 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:32:48.635700 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:32:48.639316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:32:48.679599 ignition[931]: INFO : Ignition 2.19.0
Sep 12 17:32:48.679599 ignition[931]: INFO : Stage: files
Sep 12 17:32:48.696862 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:48.696862 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:32:48.696862 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:32:48.696862 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:32:48.696862 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 17:32:48.690983 unknown[931]: wrote ssh authorized keys file for user: core
Sep 12 17:33:18.695879 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:33:18.896446 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #2
Sep 12 17:33:48.897265 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:33:49.297988 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #3
Sep 12 17:34:19.298541 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:34:20.099629 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #4
Sep 12 17:34:50.100535 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:34:51.700836 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #5
Sep 12 17:35:21.704050 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.253.38:443: i/o timeout
Sep 12 17:35:24.906503 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #6
Sep 12 17:35:54.912933 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.253.38:443: i/o timeout
Sep 12 17:35:59.913753 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #7
Sep 12 17:36:29.914467 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:36:34.914779 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #8
Sep 12 17:37:04.915444 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:37:09.915909 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #9
Sep 12 17:37:10.046515 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:37:11.172464 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 17:37:11.592102 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:37:11.987956 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:37:11.987956 ignition[931]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: files passed
Sep 12 17:37:12.027384 ignition[931]: INFO : Ignition finished successfully
Sep 12 17:37:11.993197 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:37:12.013483 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:37:12.045215 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:37:12.094151 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:37:12.262547 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:37:12.262547 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:37:12.094329 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
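The nine helm GET attempts above each fail with a ~30 s dial timeout, and the gap between a failure and the next attempt grows roughly exponentially (~0.2 s, 0.4 s, 0.8 s, 1.6 s, 3.2 s, then capped near 5 s). A sketch of that schedule, reconstructed from the timestamps rather than taken from Ignition's source:

```python
# Exponential backoff with a cap, matching the inter-attempt gaps
# visible in the log above. The 0.2s initial delay and 5s cap are our
# reading of the timestamps, not Ignition's documented parameters.
def backoff_delays(initial: float, cap: float, attempts: int) -> list:
    delays, d = [], initial
    for _ in range(attempts):
        delays.append(min(d, cap))
        d *= 2
    return delays

# Nine attempts means eight waits between them.
delays = backoff_delays(0.2, 5.0, 8)
```

Note the backoff only bounds the pause between attempts; the total stall here (~4.5 minutes before attempt #9 succeeded) is dominated by the 30 s connection timeouts themselves.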
Sep 12 17:37:12.318395 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:37:12.108997 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:37:12.120687 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:37:12.150461 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:37:12.223913 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:37:12.224045 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:37:12.242177 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:37:12.262391 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:37:12.279531 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:37:12.285504 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:37:12.354939 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:37:12.382470 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:37:12.429945 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:37:12.451577 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:37:12.475656 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:37:12.495663 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:37:12.495923 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:37:12.530653 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:37:12.549615 systemd[1]: Stopped target basic.target - Basic System. 
Sep 12 17:37:12.567589 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:37:12.585527 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:37:12.605614 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:37:12.627604 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:37:12.647532 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:37:12.666782 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:37:12.686536 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:37:12.706533 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:37:12.725479 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:37:12.725717 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:37:12.757648 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:37:12.777543 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:37:12.799477 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:37:12.799634 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:37:12.819483 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:37:12.819768 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 12 17:37:12.951329 ignition[983]: INFO : Ignition 2.19.0 Sep 12 17:37:12.951329 ignition[983]: INFO : Stage: umount Sep 12 17:37:12.951329 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:37:12.951329 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:37:12.951329 ignition[983]: INFO : umount: umount passed Sep 12 17:37:12.951329 ignition[983]: INFO : Ignition finished successfully Sep 12 17:37:12.848852 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:37:12.849345 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:37:12.869609 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:37:12.869865 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:37:12.898436 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:37:12.922399 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:37:12.922655 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:37:12.948552 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:37:12.960422 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:37:12.960698 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:37:12.979585 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:37:12.979788 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:37:13.002288 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:37:13.003327 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:37:13.003448 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:37:13.018985 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 12 17:37:13.019111 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:37:13.040031 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:37:13.040282 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:37:13.056520 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:37:13.056612 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:37:13.075449 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:37:13.075544 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:37:13.095445 systemd[1]: Stopped target network.target - Network. Sep 12 17:37:13.116345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:37:13.116474 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:37:13.136446 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:37:13.152375 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:37:13.156297 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:37:13.174344 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:37:13.174495 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:37:13.200417 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:37:13.200517 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:37:13.221428 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:37:13.221530 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:37:13.240398 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:37:13.240513 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:37:13.260458 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Sep 12 17:37:13.260566 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:37:13.281451 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:37:13.281557 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:37:13.301757 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:37:13.306261 systemd-networkd[751]: eth0: DHCPv6 lease lost Sep 12 17:37:13.321697 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:37:13.331449 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:37:13.331598 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:37:13.350968 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:37:13.351172 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:37:13.903349 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Sep 12 17:37:13.366172 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:37:13.366323 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:37:13.389119 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:37:13.389225 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:37:13.411312 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:37:13.440481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:37:13.440700 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:37:13.461479 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:37:13.461590 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:37:13.480481 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Sep 12 17:37:13.480587 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:37:13.499452 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:37:13.499566 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:37:13.520652 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:37:13.540940 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:37:13.541116 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:37:13.568728 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:37:13.568900 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:37:13.581433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:37:13.581585 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:37:13.591372 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:37:13.591478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:37:13.619345 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:37:13.619496 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:37:13.651376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:37:13.651599 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:37:13.684492 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:37:13.687488 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:37:13.687577 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:37:13.734531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 17:37:13.734756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:37:13.754167 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:37:13.754318 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:37:13.773965 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:37:13.774086 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:37:13.784986 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:37:13.810439 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:37:13.855385 systemd[1]: Switching root. Sep 12 17:37:14.287336 systemd-journald[183]: Journal stopped Sep 12 17:32:43.145129 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025 Sep 12 17:32:43.145197 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:32:43.145217 kernel: BIOS-provided physical RAM map: Sep 12 17:32:43.145232 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 12 17:32:43.145246 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 12 17:32:43.145260 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 12 17:32:43.145278 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 12 17:32:43.145297 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 12 17:32:43.145312 kernel: BIOS-e820: [mem 
0x0000000000100000-0x00000000bf8ecfff] usable Sep 12 17:32:43.145327 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Sep 12 17:32:43.145342 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Sep 12 17:32:43.145357 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Sep 12 17:32:43.145372 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 12 17:32:43.145387 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 12 17:32:43.145411 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 12 17:32:43.145428 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 12 17:32:43.145444 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 12 17:32:43.145461 kernel: NX (Execute Disable) protection: active Sep 12 17:32:43.145478 kernel: APIC: Static calls initialized Sep 12 17:32:43.145495 kernel: efi: EFI v2.7 by EDK II Sep 12 17:32:43.145512 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Sep 12 17:32:43.145528 kernel: SMBIOS 2.4 present. 
Sep 12 17:32:43.145545 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025 Sep 12 17:32:43.145562 kernel: Hypervisor detected: KVM Sep 12 17:32:43.145583 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 17:32:43.145599 kernel: kvm-clock: using sched offset of 13408503976 cycles Sep 12 17:32:43.145617 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:32:43.145634 kernel: tsc: Detected 2299.998 MHz processor Sep 12 17:32:43.145651 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 17:32:43.145668 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 17:32:43.145686 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 12 17:32:43.145703 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Sep 12 17:32:43.145720 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 17:32:43.145742 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 12 17:32:43.145759 kernel: Using GB pages for direct mapping Sep 12 17:32:43.145775 kernel: Secure boot disabled Sep 12 17:32:43.145802 kernel: ACPI: Early table checksum verification disabled Sep 12 17:32:43.145819 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 12 17:32:43.145837 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 12 17:32:43.145854 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 12 17:32:43.145885 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 12 17:32:43.145908 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 12 17:32:43.145926 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Sep 12 17:32:43.145945 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 12 17:32:43.145963 kernel: ACPI: SRAT 
0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 12 17:32:43.145981 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 12 17:32:43.146000 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 12 17:32:43.146031 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 12 17:32:43.146049 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 12 17:32:43.146067 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 12 17:32:43.146085 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 12 17:32:43.146103 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 12 17:32:43.146121 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 12 17:32:43.148252 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 12 17:32:43.148275 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 12 17:32:43.148293 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 12 17:32:43.148316 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 12 17:32:43.148332 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 12 17:32:43.148349 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 12 17:32:43.148366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 12 17:32:43.148383 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 12 17:32:43.148399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 12 17:32:43.148416 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Sep 12 17:32:43.148434 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Sep 12 17:32:43.148451 kernel: NODE_DATA(0) allocated [mem 
0x21fff8000-0x21fffdfff] Sep 12 17:32:43.148473 kernel: Zone ranges: Sep 12 17:32:43.148490 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:32:43.148508 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 12 17:32:43.148527 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 12 17:32:43.148545 kernel: Movable zone start for each node Sep 12 17:32:43.148564 kernel: Early memory node ranges Sep 12 17:32:43.148581 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 12 17:32:43.148601 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 12 17:32:43.148618 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Sep 12 17:32:43.148642 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 12 17:32:43.148659 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 12 17:32:43.148677 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 12 17:32:43.148695 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:32:43.148712 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 12 17:32:43.148731 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 12 17:32:43.148750 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 12 17:32:43.148767 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 12 17:32:43.148786 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 12 17:32:43.148808 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 17:32:43.148827 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 17:32:43.148846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 17:32:43.148872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 17:32:43.148891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 17:32:43.148909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 17:32:43.148928 kernel: 
ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:32:43.148944 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 12 17:32:43.148959 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 12 17:32:43.148980 kernel: Booting paravirtualized kernel on KVM Sep 12 17:32:43.148998 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:32:43.149014 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 12 17:32:43.149031 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576 Sep 12 17:32:43.149049 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152 Sep 12 17:32:43.149065 kernel: pcpu-alloc: [0] 0 1 Sep 12 17:32:43.149083 kernel: kvm-guest: PV spinlocks enabled Sep 12 17:32:43.149101 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 17:32:43.149121 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:32:43.149201 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:32:43.149217 kernel: random: crng init done Sep 12 17:32:43.149233 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 12 17:32:43.149250 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:32:43.149269 kernel: Fallback order for Node 0: 0 Sep 12 17:32:43.149287 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Sep 12 17:32:43.149305 kernel: Policy zone: Normal Sep 12 17:32:43.149322 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:32:43.149345 kernel: software IO TLB: area num 2. Sep 12 17:32:43.149362 kernel: Memory: 7513392K/7860584K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 346932K reserved, 0K cma-reserved) Sep 12 17:32:43.149380 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:32:43.149398 kernel: Kernel/User page tables isolation: enabled Sep 12 17:32:43.149417 kernel: ftrace: allocating 37974 entries in 149 pages Sep 12 17:32:43.149435 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 17:32:43.149454 kernel: Dynamic Preempt: voluntary Sep 12 17:32:43.149472 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:32:43.149492 kernel: rcu: RCU event tracing is enabled. Sep 12 17:32:43.149528 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:32:43.149548 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:32:43.149568 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:32:43.149592 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:32:43.149612 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:32:43.149630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:32:43.149649 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 12 17:32:43.149678 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 12 17:32:43.149698 kernel: Console: colour dummy device 80x25 Sep 12 17:32:43.149722 kernel: printk: console [ttyS0] enabled Sep 12 17:32:43.149742 kernel: ACPI: Core revision 20230628 Sep 12 17:32:43.149762 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:32:43.149782 kernel: x2apic enabled Sep 12 17:32:43.149802 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 17:32:43.149822 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 12 17:32:43.149840 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 12 17:32:43.149860 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Sep 12 17:32:43.149892 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 12 17:32:43.149912 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 12 17:32:43.149932 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:32:43.149952 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 12 17:32:43.149972 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 12 17:32:43.149992 kernel: Spectre V2 : Mitigation: IBRS Sep 12 17:32:43.150012 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 17:32:43.150031 kernel: RETBleed: Mitigation: IBRS Sep 12 17:32:43.150051 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 17:32:43.150081 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Sep 12 17:32:43.150100 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 17:32:43.150120 kernel: MDS: Mitigation: Clear CPU buffers Sep 12 17:32:43.150383 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 17:32:43.150406 kernel: active return thunk: its_return_thunk Sep 12 17:32:43.150426 
kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 17:32:43.150447 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:32:43.150466 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:32:43.150616 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:32:43.150642 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:32:43.150659 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 12 17:32:43.150679 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:32:43.150700 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:32:43.150719 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:32:43.150739 kernel: landlock: Up and running. Sep 12 17:32:43.150892 kernel: SELinux: Initializing. Sep 12 17:32:43.150914 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 17:32:43.150934 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 17:32:43.150959 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 12 17:32:43.150979 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:32:43.151128 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:32:43.151199 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:32:43.151347 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 12 17:32:43.151366 kernel: signal: max sigframe size: 1776 Sep 12 17:32:43.151386 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:32:43.151407 kernel: rcu: Max phase no-delay instances is 400. 
Sep 12 17:32:43.151551 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 17:32:43.151576 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:32:43.151596 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:32:43.151616 kernel: .... node #0, CPUs: #1 Sep 12 17:32:43.151637 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 12 17:32:43.151669 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 12 17:32:43.151689 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:32:43.151708 kernel: smpboot: Max logical packages: 1 Sep 12 17:32:43.151728 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 12 17:32:43.151753 kernel: devtmpfs: initialized Sep 12 17:32:43.151773 kernel: x86/mm: Memory block size: 128MB Sep 12 17:32:43.151793 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 12 17:32:43.151812 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:32:43.151833 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:32:43.151853 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:32:43.151879 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:32:43.151898 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:32:43.151918 kernel: audit: type=2000 audit(1757698361.764:1): state=initialized audit_enabled=0 res=1 Sep 12 17:32:43.151941 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:32:43.151961 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:32:43.151981 kernel: cpuidle: using governor menu Sep 12 17:32:43.152001 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 
17:32:43.152020 kernel: dca service started, version 1.12.1 Sep 12 17:32:43.152040 kernel: PCI: Using configuration type 1 for base access Sep 12 17:32:43.152060 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 12 17:32:43.152080 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:32:43.152100 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:32:43.152123 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:32:43.152165 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:32:43.152184 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:32:43.152204 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:32:43.152223 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:32:43.152242 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 12 17:32:43.152263 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 17:32:43.152283 kernel: ACPI: Interpreter enabled Sep 12 17:32:43.152303 kernel: ACPI: PM: (supports S0 S3 S5) Sep 12 17:32:43.152327 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:32:43.152347 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:32:43.152368 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 12 17:32:43.152388 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 12 17:32:43.152408 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:32:43.152681 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:32:43.152893 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 12 17:32:43.153117 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 12 17:32:43.153178 kernel: PCI host bridge to bus 0000:00 Sep 12 
17:32:43.153374 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 17:32:43.153546 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 17:32:43.153713 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 17:32:43.153896 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 12 17:32:43.154061 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:32:43.154552 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 12 17:32:43.154777 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Sep 12 17:32:43.154982 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 12 17:32:43.155197 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 12 17:32:43.155398 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Sep 12 17:32:43.155597 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Sep 12 17:32:43.155803 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Sep 12 17:32:43.156004 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 12 17:32:43.156226 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Sep 12 17:32:43.156412 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Sep 12 17:32:43.156606 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Sep 12 17:32:43.156792 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 12 17:32:43.156987 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Sep 12 17:32:43.157017 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 17:32:43.157037 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 17:32:43.157056 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 17:32:43.157075 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 17:32:43.157094 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 12 17:32:43.157113 
kernel: iommu: Default domain type: Translated Sep 12 17:32:43.157132 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:32:43.157176 kernel: efivars: Registered efivars operations Sep 12 17:32:43.157195 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:32:43.157219 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:32:43.157238 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 12 17:32:43.157257 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 12 17:32:43.157275 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 12 17:32:43.157294 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 12 17:32:43.157313 kernel: vgaarb: loaded Sep 12 17:32:43.157332 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 17:32:43.157351 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:32:43.157370 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:32:43.157393 kernel: pnp: PnP ACPI init Sep 12 17:32:43.157412 kernel: pnp: PnP ACPI: found 7 devices Sep 12 17:32:43.157431 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:32:43.157451 kernel: NET: Registered PF_INET protocol family Sep 12 17:32:43.157470 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 17:32:43.157489 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 12 17:32:43.157508 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:32:43.157527 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:32:43.157551 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 12 17:32:43.159208 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 12 17:32:43.159230 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 12 17:32:43.159246 kernel: UDP-Lite 
hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 12 17:32:43.159263 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:32:43.159281 kernel: NET: Registered PF_XDP protocol family Sep 12 17:32:43.159490 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 17:32:43.159666 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 17:32:43.159914 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 17:32:43.160101 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 12 17:32:43.160313 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 12 17:32:43.160339 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:32:43.160360 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 12 17:32:43.160379 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 12 17:32:43.160399 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 17:32:43.160419 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 12 17:32:43.160438 kernel: clocksource: Switched to clocksource tsc Sep 12 17:32:43.160463 kernel: Initialise system trusted keyrings Sep 12 17:32:43.160482 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 12 17:32:43.160501 kernel: Key type asymmetric registered Sep 12 17:32:43.160520 kernel: Asymmetric key parser 'x509' registered Sep 12 17:32:43.160539 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:32:43.160559 kernel: io scheduler mq-deadline registered Sep 12 17:32:43.160578 kernel: io scheduler kyber registered Sep 12 17:32:43.160597 kernel: io scheduler bfq registered Sep 12 17:32:43.160616 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:32:43.160639 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 12 17:32:43.160882 kernel: virtio-pci 0000:00:03.0: 
virtio_pci: leaving for legacy driver Sep 12 17:32:43.160908 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 12 17:32:43.161094 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 12 17:32:43.161119 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 12 17:32:43.162523 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 12 17:32:43.162553 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:32:43.162702 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:32:43.162722 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 12 17:32:43.162748 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 12 17:32:43.162768 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 12 17:32:43.165726 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 12 17:32:43.165762 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 17:32:43.165783 kernel: i8042: Warning: Keylock active Sep 12 17:32:43.165802 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:32:43.165821 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:32:43.166022 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 12 17:32:43.166243 kernel: rtc_cmos 00:00: registered as rtc0 Sep 12 17:32:43.166414 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:32:42 UTC (1757698362) Sep 12 17:32:43.166586 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 12 17:32:43.166610 kernel: intel_pstate: CPU model not supported Sep 12 17:32:43.166628 kernel: pstore: Using crash dump compression: deflate Sep 12 17:32:43.166647 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 17:32:43.166666 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:32:43.166685 kernel: Segment Routing with IPv6 Sep 12 17:32:43.166710 kernel: In-situ OAM (IOAM) with IPv6 
Sep 12 17:32:43.166728 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:32:43.166757 kernel: Key type dns_resolver registered Sep 12 17:32:43.166776 kernel: IPI shorthand broadcast: enabled Sep 12 17:32:43.166794 kernel: sched_clock: Marking stable (935004781, 200996125)->(1244551583, -108550677) Sep 12 17:32:43.166812 kernel: registered taskstats version 1 Sep 12 17:32:43.166843 kernel: Loading compiled-in X.509 certificates Sep 12 17:32:43.166870 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:32:43.166889 kernel: Key type .fscrypt registered Sep 12 17:32:43.166917 kernel: Key type fscrypt-provisioning registered Sep 12 17:32:43.166935 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:32:43.166953 kernel: ima: No architecture policies found Sep 12 17:32:43.166971 kernel: clk: Disabling unused clocks Sep 12 17:32:43.166990 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:32:43.167010 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:32:43.167031 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:32:43.167050 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:32:43.167075 kernel: Run /init as init process Sep 12 17:32:43.167095 kernel: with arguments: Sep 12 17:32:43.167114 kernel: /init Sep 12 17:32:43.167133 kernel: with environment: Sep 12 17:32:43.170197 kernel: HOME=/ Sep 12 17:32:43.170218 kernel: TERM=linux Sep 12 17:32:43.170240 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:32:43.170270 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 
17:32:43.170302 systemd[1]: Detected virtualization google. Sep 12 17:32:43.170323 systemd[1]: Detected architecture x86-64. Sep 12 17:32:43.170343 systemd[1]: Running in initrd. Sep 12 17:32:43.170363 systemd[1]: No hostname configured, using default hostname. Sep 12 17:32:43.170383 systemd[1]: Hostname set to . Sep 12 17:32:43.170403 systemd[1]: Initializing machine ID from random generator. Sep 12 17:32:43.170423 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:32:43.170444 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:32:43.170468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:32:43.170490 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:32:43.170511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:32:43.170531 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:32:43.170552 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:32:43.170576 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:32:43.170597 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:32:43.170619 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:32:43.170641 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:32:43.170682 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:32:43.170714 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:32:43.170736 systemd[1]: Reached target swap.target - Swaps. 
Sep 12 17:32:43.170757 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:32:43.170783 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:32:43.170804 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:32:43.170826 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:32:43.170848 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:32:43.170876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:32:43.170898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:32:43.170919 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:32:43.170941 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:32:43.170962 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:32:43.170987 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:32:43.171009 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:32:43.171030 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:32:43.171052 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:32:43.171073 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:32:43.171095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:32:43.171169 systemd-journald[183]: Collecting audit messages is disabled. Sep 12 17:32:43.171221 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:32:43.171242 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:32:43.171264 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 12 17:32:43.171298 systemd-journald[183]: Journal started Sep 12 17:32:43.171341 systemd-journald[183]: Runtime Journal (/run/log/journal/dc393a6da73741bc92333e141355717a) is 8.0M, max 148.7M, 140.7M free. Sep 12 17:32:43.163348 systemd-modules-load[184]: Inserted module 'overlay' Sep 12 17:32:43.182393 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:32:43.186617 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:32:43.199646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:32:43.209538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:32:43.223351 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:32:43.223394 kernel: Bridge firewalling registered Sep 12 17:32:43.218341 systemd-modules-load[184]: Inserted module 'br_netfilter' Sep 12 17:32:43.220417 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:32:43.224234 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:32:43.241376 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:32:43.263430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:32:43.265863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:32:43.266737 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:32:43.293132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:32:43.297933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 17:32:43.303620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:32:43.308358 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:32:43.323377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:32:43.340183 dracut-cmdline[215]: dracut-dracut-053 Sep 12 17:32:43.345122 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:32:43.386435 systemd-resolved[218]: Positive Trust Anchors: Sep 12 17:32:43.386463 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:32:43.386538 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:32:43.391857 systemd-resolved[218]: Defaulting to hostname 'linux'. Sep 12 17:32:43.393716 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:32:43.397702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 12 17:32:43.465189 kernel: SCSI subsystem initialized Sep 12 17:32:43.477188 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:32:43.490172 kernel: iscsi: registered transport (tcp) Sep 12 17:32:43.516369 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:32:43.516458 kernel: QLogic iSCSI HBA Driver Sep 12 17:32:43.570813 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:32:43.577398 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:32:43.622934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:32:43.623026 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:32:43.623062 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:32:43.670192 kernel: raid6: avx2x4 gen() 17816 MB/s Sep 12 17:32:43.687186 kernel: raid6: avx2x2 gen() 17516 MB/s Sep 12 17:32:43.713468 kernel: raid6: avx2x1 gen() 13390 MB/s Sep 12 17:32:43.713553 kernel: raid6: using algorithm avx2x4 gen() 17816 MB/s Sep 12 17:32:43.740407 kernel: raid6: .... xor() 7896 MB/s, rmw enabled Sep 12 17:32:43.740492 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:32:43.770187 kernel: xor: automatically using best checksumming function avx Sep 12 17:32:43.951185 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:32:43.965916 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:32:43.983391 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:32:44.034445 systemd-udevd[401]: Using default interface naming scheme 'v255'. Sep 12 17:32:44.041774 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:32:44.077502 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 12 17:32:44.097341 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Sep 12 17:32:44.135719 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:32:44.142399 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:32:44.265999 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:32:44.302538 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:32:44.355348 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:32:44.384166 kernel: scsi host0: Virtio SCSI HBA Sep 12 17:32:44.389760 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:32:44.409302 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 12 17:32:44.421352 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:32:44.451302 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:32:44.441766 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:32:44.490984 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:32:44.491068 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 12 17:32:44.491452 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 12 17:32:44.491692 kernel: AES CTR mode by8 optimization enabled Sep 12 17:32:44.505439 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:32:44.593725 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 12 17:32:44.594094 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 12 17:32:44.594346 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 17:32:44.594582 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 12 17:32:44.594608 kernel: GPT:17805311 != 25165823 Sep 12 17:32:44.594631 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:32:44.594663 kernel: GPT:17805311 != 25165823 Sep 12 17:32:44.594687 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:32:44.566022 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:32:44.621345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:32:44.621388 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 12 17:32:44.566252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:32:44.644253 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:32:44.655915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:32:44.711328 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (453) Sep 12 17:32:44.711375 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (451) Sep 12 17:32:44.656309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:32:44.701060 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:32:44.727598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:32:44.762122 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:32:44.783488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:32:44.803771 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Sep 12 17:32:44.833964 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 12 17:32:44.839796 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. 
Sep 12 17:32:44.865368 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 12 17:32:44.894295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 12 17:32:44.924460 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:32:44.941932 disk-uuid[542]: Primary Header is updated. Sep 12 17:32:44.941932 disk-uuid[542]: Secondary Entries is updated. Sep 12 17:32:44.941932 disk-uuid[542]: Secondary Header is updated. Sep 12 17:32:44.988888 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:32:44.988944 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:32:44.954417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:32:45.033494 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:32:46.015548 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:32:46.015637 disk-uuid[543]: The operation has completed successfully. Sep 12 17:32:46.097703 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:32:46.097874 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:32:46.122513 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:32:46.159017 sh[569]: Success Sep 12 17:32:46.182214 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:32:46.282763 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:32:46.290129 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:32:46.315854 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 17:32:46.369741 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:32:46.369838 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:32:46.369865 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:32:46.386265 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:32:46.386363 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:32:46.430195 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:32:46.439602 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:32:46.440606 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:32:46.446478 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:32:46.460596 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:32:46.516226 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:32:46.516319 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:32:46.532405 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:32:46.553113 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:32:46.553220 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:32:46.571869 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:32:46.589368 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:32:46.596897 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:32:46.613493 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 17:32:46.666505 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:32:46.701485 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:32:46.733369 systemd-networkd[751]: lo: Link UP Sep 12 17:32:46.733382 systemd-networkd[751]: lo: Gained carrier Sep 12 17:32:46.735667 systemd-networkd[751]: Enumeration completed Sep 12 17:32:46.736590 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:32:46.736845 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:32:46.736852 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:32:46.741740 systemd-networkd[751]: eth0: Link UP Sep 12 17:32:46.741748 systemd-networkd[751]: eth0: Gained carrier Sep 12 17:32:46.741763 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:32:46.747857 systemd[1]: Reached target network.target - Network. Sep 12 17:32:46.847843 ignition[698]: Ignition 2.19.0 Sep 12 17:32:46.761468 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 12 17:32:46.847854 ignition[698]: Stage: fetch-offline Sep 12 17:32:46.850600 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:32:46.847913 ignition[698]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:32:46.857418 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 17:32:46.847927 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:32:46.904583 unknown[761]: fetched base config from "system" Sep 12 17:32:46.848218 ignition[698]: parsed url from cmdline: "" Sep 12 17:32:46.904595 unknown[761]: fetched base config from "system" Sep 12 17:32:46.848224 ignition[698]: no config URL provided Sep 12 17:32:46.904607 unknown[761]: fetched user config from "gcp" Sep 12 17:32:46.848232 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:32:46.907223 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:32:46.848246 ignition[698]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:32:46.931490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:32:46.848257 ignition[698]: failed to fetch config: resource requires networking Sep 12 17:32:46.993512 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:32:46.848627 ignition[698]: Ignition finished successfully Sep 12 17:32:47.019486 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:32:46.895535 ignition[761]: Ignition 2.19.0 Sep 12 17:32:47.061992 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:32:46.895546 ignition[761]: Stage: fetch Sep 12 17:32:47.080317 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:32:46.895827 ignition[761]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:32:47.086554 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:32:46.895845 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:32:47.117393 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:32:46.896001 ignition[761]: parsed url from cmdline: "" Sep 12 17:32:47.134504 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 12 17:32:46.896008 ignition[761]: no config URL provided Sep 12 17:32:47.152515 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:32:46.896016 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:32:47.164553 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:32:46.896030 ignition[761]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:32:46.896060 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 12 17:32:46.899654 ignition[761]: GET result: OK Sep 12 17:32:46.899914 ignition[761]: parsing config with SHA512: 33b6da2ddc79394bfbf121402fbc9d9c36a8fa9efa3eb00888dbc36b841da27e46a6e9c19d1009423b4ab59e8cd43cd541760cf1efc33fa41e9222f67072f250 Sep 12 17:32:46.905123 ignition[761]: fetch: fetch complete Sep 12 17:32:46.905131 ignition[761]: fetch: fetch passed Sep 12 17:32:46.905248 ignition[761]: Ignition finished successfully Sep 12 17:32:46.989926 ignition[768]: Ignition 2.19.0 Sep 12 17:32:46.989939 ignition[768]: Stage: kargs Sep 12 17:32:46.990245 ignition[768]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:32:46.990267 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:32:46.991948 ignition[768]: kargs: kargs passed Sep 12 17:32:46.992026 ignition[768]: Ignition finished successfully Sep 12 17:32:47.059215 ignition[774]: Ignition 2.19.0 Sep 12 17:32:47.059224 ignition[774]: Stage: disks Sep 12 17:32:47.059444 ignition[774]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:32:47.059457 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 17:32:47.060707 ignition[774]: disks: disks passed Sep 12 17:32:47.060772 ignition[774]: Ignition finished successfully Sep 12 17:32:47.236752 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 12 17:32:47.374439 systemd[1]: Finished systemd-fsck-root.service - File System Check on 
/dev/disk/by-label/ROOT. Sep 12 17:32:47.405401 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:32:47.532193 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:32:47.533778 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:32:47.534734 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:32:47.556461 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:32:47.593351 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:32:47.667422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790) Sep 12 17:32:47.667480 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:32:47.667505 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:32:47.667521 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:32:47.667536 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:32:47.667550 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:32:47.633965 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:32:47.634049 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:32:47.634102 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:32:47.649707 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:32:47.677183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:32:47.706422 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 17:32:47.877588 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:32:47.888496 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:32:47.898764 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:32:47.909790 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:32:48.086671 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:32:48.093325 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:32:48.133796 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:32:48.163214 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:48.146460 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:32:48.192998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:32:48.206058 ignition[902]: INFO : Ignition 2.19.0
Sep 12 17:32:48.206058 ignition[902]: INFO : Stage: mount
Sep 12 17:32:48.235378 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:48.235378 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:32:48.235378 ignition[902]: INFO : mount: mount passed
Sep 12 17:32:48.235378 ignition[902]: INFO : Ignition finished successfully
Sep 12 17:32:48.213824 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:32:48.227468 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:32:48.371428 systemd-networkd[751]: eth0: Gained IPv6LL
Sep 12 17:32:48.541494 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:32:48.592195 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914)
Sep 12 17:32:48.612183 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:32:48.612282 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:32:48.612309 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:32:48.635606 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 12 17:32:48.635700 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:32:48.639316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:32:48.679599 ignition[931]: INFO : Ignition 2.19.0
Sep 12 17:32:48.679599 ignition[931]: INFO : Stage: files
Sep 12 17:32:48.696862 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:32:48.696862 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:32:48.696862 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:32:48.696862 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:32:48.696862 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:32:48.696862 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 17:32:48.690983 unknown[931]: wrote ssh authorized keys file for user: core
Sep 12 17:33:18.695879 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:33:18.896446 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #2
Sep 12 17:33:48.897265 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:33:49.297988 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #3
Sep 12 17:34:19.298541 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:34:20.099629 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #4
Sep 12 17:34:50.100535 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:34:51.700836 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #5
Sep 12 17:35:21.704050 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.253.38:443: i/o timeout
Sep 12 17:35:24.906503 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #6
Sep 12 17:35:54.912933 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.253.38:443: i/o timeout
Sep 12 17:35:59.913753 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #7
Sep 12 17:36:29.914467 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:36:34.914779 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #8
Sep 12 17:37:04.915444 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.38:443: i/o timeout
Sep 12 17:37:09.915909 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #9
Sep 12 17:37:10.046515 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:37:11.172464 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:37:11.190432 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 17:37:11.592102 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:37:11.987956 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:37:11.987956 ignition[931]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:37:12.027384 ignition[931]: INFO : files: files passed
Sep 12 17:37:12.027384 ignition[931]: INFO : Ignition finished successfully
Sep 12 17:37:11.993197 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:37:12.013483 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:37:12.045215 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:37:12.094151 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:37:12.262547 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:37:12.262547 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:37:12.094329 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:37:12.318395 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:37:12.108997 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:37:12.120687 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:37:12.150461 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:37:12.223913 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:37:12.224045 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:37:12.242177 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:37:12.262391 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:37:12.279531 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:37:12.285504 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:37:12.354939 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:37:12.382470 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:37:12.429945 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:37:12.451577 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:37:12.475656 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:37:12.495663 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:37:12.495923 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:37:12.530653 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:37:12.549615 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:37:12.567589 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:37:12.585527 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:37:12.605614 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:37:12.627604 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:37:12.647532 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:37:12.666782 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:37:12.686536 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:37:12.706533 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:37:12.725479 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:37:12.725717 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:37:12.757648 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:37:12.777543 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:37:12.799477 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:37:12.799634 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:37:12.819483 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:37:12.819768 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:37:12.951329 ignition[983]: INFO : Ignition 2.19.0
Sep 12 17:37:12.951329 ignition[983]: INFO : Stage: umount
Sep 12 17:37:12.951329 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:37:12.951329 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 12 17:37:12.951329 ignition[983]: INFO : umount: umount passed
Sep 12 17:37:12.951329 ignition[983]: INFO : Ignition finished successfully
Sep 12 17:37:12.848852 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:37:12.849345 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:37:12.869609 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:37:12.869865 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:37:12.898436 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:37:12.922399 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:37:12.922655 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:37:12.948552 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:37:12.960422 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:37:12.960698 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:37:12.979585 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:37:12.979788 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:37:13.002288 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:37:13.003327 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:37:13.003448 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:37:13.018985 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:37:13.019111 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:37:13.040031 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:37:13.040282 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:37:13.056520 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:37:13.056612 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:37:13.075449 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:37:13.075544 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:37:13.095445 systemd[1]: Stopped target network.target - Network.
Sep 12 17:37:13.116345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:37:13.116474 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:37:13.136446 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:37:13.152375 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:37:13.156297 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:37:13.174344 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:37:13.174495 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:37:13.200417 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:37:13.200517 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:37:13.221428 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:37:13.221530 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:37:13.240398 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:37:13.240513 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:37:13.260458 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:37:13.260566 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:37:13.281451 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:37:13.281557 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:37:13.301757 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:37:13.306261 systemd-networkd[751]: eth0: DHCPv6 lease lost
Sep 12 17:37:13.321697 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:37:13.331449 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:37:13.331598 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:37:13.350968 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:37:13.351172 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:37:13.903349 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:37:13.366172 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:37:13.366323 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:37:13.389119 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:37:13.389225 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:37:13.411312 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:37:13.440481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:37:13.440700 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:37:13.461479 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:37:13.461590 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:37:13.480481 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:37:13.480587 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:37:13.499452 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:37:13.499566 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:37:13.520652 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:37:13.540940 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:37:13.541116 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:37:13.568728 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:37:13.568900 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:37:13.581433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:37:13.581585 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:37:13.591372 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:37:13.591478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:37:13.619345 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:37:13.619496 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:37:13.651376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:37:13.651599 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:37:13.684492 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:37:13.687488 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:37:13.687577 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:37:13.734531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:37:13.734756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:37:13.754167 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:37:13.754318 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:37:13.773965 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:37:13.774086 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:37:13.784986 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:37:13.810439 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:37:13.855385 systemd[1]: Switching root.
Sep 12 17:37:14.287336 systemd-journald[183]: Journal stopped
Sep 12 17:37:16.811152 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:37:16.811234 kernel: SELinux: policy capability open_perms=1
Sep 12 17:37:16.811260 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:37:16.811278 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:37:16.811298 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:37:16.811316 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:37:16.811338 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:37:16.811363 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:37:16.811383 kernel: audit: type=1403 audit(1757698634.528:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:37:16.811411 systemd[1]: Successfully loaded SELinux policy in 91.879ms.
Sep 12 17:37:16.811434 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.369ms.
Sep 12 17:37:16.811459 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:37:16.811481 systemd[1]: Detected virtualization google.
Sep 12 17:37:16.811502 systemd[1]: Detected architecture x86-64.
Sep 12 17:37:16.811530 systemd[1]: Detected first boot.
Sep 12 17:37:16.811554 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:37:16.811576 zram_generator::config[1025]: No configuration found.
Sep 12 17:37:16.811601 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:37:16.811623 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:37:16.811649 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:37:16.811672 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:37:16.811696 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:37:16.811718 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:37:16.811740 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:37:16.811764 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:37:16.811787 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:37:16.811815 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:37:16.811839 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:37:16.811860 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:37:16.811883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:37:16.811908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:37:16.811930 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:37:16.811952 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:37:16.811973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:37:16.811998 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:37:16.812020 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:37:16.812042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:37:16.812062 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:37:16.812084 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:37:16.812105 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:37:16.812133 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:37:16.812171 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:37:16.812190 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:37:16.812223 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:37:16.812244 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:37:16.812265 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:37:16.812287 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:37:16.812307 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:37:16.812327 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:37:16.812347 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:37:16.812373 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:37:16.812396 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:37:16.812416 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:37:16.812438 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:37:16.812460 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:16.812486 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:37:16.812508 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:37:16.812529 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:37:16.812551 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:37:16.812572 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:37:16.812594 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:37:16.812616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:37:16.812639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:37:16.812666 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:37:16.812689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:37:16.812710 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:37:16.812731 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:37:16.812755 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:37:16.812776 kernel: ACPI: bus type drm_connector registered
Sep 12 17:37:16.812796 kernel: fuse: init (API version 7.39)
Sep 12 17:37:16.812818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:37:16.812844 kernel: loop: module loaded
Sep 12 17:37:16.812868 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:37:16.812892 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:37:16.812914 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:37:16.812937 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:37:16.812959 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:37:16.812982 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:37:16.813014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:37:16.813038 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:37:16.813070 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:37:16.813159 systemd-journald[1112]: Collecting audit messages is disabled.
Sep 12 17:37:16.813221 systemd-journald[1112]: Journal started
Sep 12 17:37:16.813273 systemd-journald[1112]: Runtime Journal (/run/log/journal/3a76c4ed405546f8a341239a91b81904) is 8.0M, max 148.7M, 140.7M free.
Sep 12 17:37:15.526085 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:37:15.549372 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 17:37:15.549993 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:37:16.828235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:37:16.851017 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:37:16.851181 systemd[1]: Stopped verity-setup.service.
Sep 12 17:37:16.876171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:16.885190 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:37:16.895782 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:37:16.907680 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:37:16.917596 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:37:16.927587 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:37:16.938584 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:37:16.948626 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:37:16.959760 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:37:16.971733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:37:16.983850 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:37:16.984118 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:37:16.996865 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:37:16.997106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:37:17.008847 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:37:17.009106 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:37:17.019798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:37:17.020044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:37:17.031768 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:37:17.032068 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:37:17.042826 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:37:17.043083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:37:17.053788 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:37:17.063776 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:37:17.075777 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:37:17.087811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:37:17.113354 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:37:17.130540 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:37:17.154473 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:37:17.165469 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:37:17.165830 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:37:17.179123 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:37:17.197783 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:37:17.222514 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:37:17.234820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:37:17.243831 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:37:17.262352 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:37:17.273436 systemd-journald[1112]: Time spent on flushing to /var/log/journal/3a76c4ed405546f8a341239a91b81904 is 104.510ms for 939 entries. Sep 12 17:37:17.273436 systemd-journald[1112]: System Journal (/var/log/journal/3a76c4ed405546f8a341239a91b81904) is 8.0M, max 584.8M, 576.8M free. Sep 12 17:37:17.411704 systemd-journald[1112]: Received client request to flush runtime journal. Sep 12 17:37:17.286608 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:37:17.295413 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:37:17.306384 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:37:17.316542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:37:17.334572 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:37:17.359944 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:37:17.385721 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:37:17.409205 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:37:17.424911 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:37:17.440262 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:37:17.454883 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Sep 12 17:37:17.473204 kernel: loop0: detected capacity change from 0 to 142488 Sep 12 17:37:17.474994 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:37:17.504959 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:37:17.530363 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:37:17.546013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:37:17.567714 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:37:17.604417 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:37:17.620492 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:37:17.627690 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:37:17.645793 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:37:17.649636 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:37:17.674230 kernel: loop1: detected capacity change from 0 to 54824 Sep 12 17:37:17.719729 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. Sep 12 17:37:17.719771 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. Sep 12 17:37:17.736432 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:37:17.784177 kernel: loop2: detected capacity change from 0 to 140768 Sep 12 17:37:17.922288 kernel: loop3: detected capacity change from 0 to 221472 Sep 12 17:37:18.079199 kernel: loop4: detected capacity change from 0 to 142488 Sep 12 17:37:18.162833 kernel: loop5: detected capacity change from 0 to 54824 Sep 12 17:37:18.221192 kernel: loop6: detected capacity change from 0 to 140768 Sep 12 17:37:18.310221 kernel: loop7: detected capacity change from 0 to 221472 Sep 12 17:37:18.390514 (sd-merge)[1167]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Sep 12 17:37:18.391922 (sd-merge)[1167]: Merged extensions into '/usr'. Sep 12 17:37:18.406401 systemd[1]: Reloading requested from client PID 1143 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:37:18.406424 systemd[1]: Reloading... Sep 12 17:37:18.617197 zram_generator::config[1196]: No configuration found. Sep 12 17:37:18.967228 ldconfig[1138]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:37:18.975989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:37:19.085088 systemd[1]: Reloading finished in 676 ms. Sep 12 17:37:19.123963 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:37:19.137433 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:37:19.153899 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:37:19.180718 systemd[1]: Starting ensure-sysext.service... Sep 12 17:37:19.196842 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:37:19.220662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 12 17:37:19.240573 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:37:19.240607 systemd[1]: Reloading... Sep 12 17:37:19.263476 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:37:19.264861 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:37:19.269930 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:37:19.274850 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Sep 12 17:37:19.276966 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Sep 12 17:37:19.286888 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:37:19.287888 systemd-tmpfiles[1235]: Skipping /boot Sep 12 17:37:19.304811 systemd-udevd[1236]: Using default interface naming scheme 'v255'. Sep 12 17:37:19.328130 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:37:19.328171 systemd-tmpfiles[1235]: Skipping /boot Sep 12 17:37:19.437175 zram_generator::config[1265]: No configuration found. Sep 12 17:37:19.762171 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 12 17:37:19.798021 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 17:37:19.836177 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:37:19.876659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 12 17:37:19.918275 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 12 17:37:19.918391 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1269) Sep 12 17:37:19.950293 kernel: ACPI: button: Sleep Button [SLPF] Sep 12 17:37:20.009561 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:37:20.033170 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 12 17:37:20.103503 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:37:20.130943 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:37:20.133076 systemd[1]: Reloading finished in 891 ms. Sep 12 17:37:20.171220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:37:20.195160 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:37:20.231520 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:37:20.256953 systemd[1]: Finished ensure-sysext.service. Sep 12 17:37:20.287369 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 12 17:37:20.304432 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:37:20.310508 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:37:20.332335 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:37:20.344817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:37:20.353383 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:37:20.377087 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 12 17:37:20.399625 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:37:20.419930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:37:20.424634 lvm[1341]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:37:20.443035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:37:20.463090 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:37:20.470239 augenrules[1360]: No rules Sep 12 17:37:20.472548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:37:20.481360 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:37:20.507567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:37:20.536620 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:37:20.561651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:37:20.575389 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:37:20.601594 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:37:20.625323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:37:20.638338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:37:20.645314 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:37:20.659778 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:37:20.676122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 12 17:37:20.676412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:37:20.677156 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:37:20.677397 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:37:20.677990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:37:20.678318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:37:20.678880 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:37:20.679116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:37:20.686879 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:37:20.687943 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:37:20.708945 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:37:20.718710 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:37:20.725830 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:37:20.729577 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Sep 12 17:37:20.729711 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:37:20.729824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:37:20.740511 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:37:20.748937 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:37:20.751975 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Sep 12 17:37:20.753236 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:37:20.762629 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:37:20.767839 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:37:20.830396 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:37:20.839331 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:37:20.875282 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Sep 12 17:37:20.894339 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:37:20.928991 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:37:21.030276 systemd-networkd[1368]: lo: Link UP Sep 12 17:37:21.030840 systemd-networkd[1368]: lo: Gained carrier Sep 12 17:37:21.033830 systemd-networkd[1368]: Enumeration completed Sep 12 17:37:21.034309 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:37:21.035078 systemd-resolved[1369]: Positive Trust Anchors: Sep 12 17:37:21.035104 systemd-resolved[1369]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:37:21.035187 systemd-resolved[1369]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:37:21.035288 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:37:21.035302 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:37:21.036115 systemd-networkd[1368]: eth0: Link UP Sep 12 17:37:21.036123 systemd-networkd[1368]: eth0: Gained carrier Sep 12 17:37:21.036199 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:37:21.044039 systemd-resolved[1369]: Defaulting to hostname 'linux'. Sep 12 17:37:21.047236 systemd-networkd[1368]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 12 17:37:21.053701 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:37:21.067833 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:37:21.079983 systemd[1]: Reached target network.target - Network. Sep 12 17:37:21.092539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:37:21.107458 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 12 17:37:21.119751 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:37:21.131617 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:37:21.144719 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:37:21.156990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:37:21.170435 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:37:21.184468 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:37:21.184534 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:37:21.194603 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:37:21.207606 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:37:21.220736 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:37:21.245438 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:37:21.258379 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:37:21.269702 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:37:21.281504 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:37:21.291513 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:37:21.291744 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:37:21.298620 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:37:21.324383 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:37:21.343891 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Sep 12 17:37:21.367371 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:37:21.400454 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:37:21.411899 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:37:21.420523 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:37:21.426528 jq[1419]: false Sep 12 17:37:21.441572 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:37:21.465334 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:37:21.473863 coreos-metadata[1417]: Sep 12 17:37:21.472 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Sep 12 17:37:21.488197 coreos-metadata[1417]: Sep 12 17:37:21.485 INFO Fetch successful Sep 12 17:37:21.488197 coreos-metadata[1417]: Sep 12 17:37:21.486 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Sep 12 17:37:21.486538 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 12 17:37:21.489114 coreos-metadata[1417]: Sep 12 17:37:21.488 INFO Fetch successful Sep 12 17:37:21.489114 coreos-metadata[1417]: Sep 12 17:37:21.489 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Sep 12 17:37:21.492569 coreos-metadata[1417]: Sep 12 17:37:21.492 INFO Fetch successful Sep 12 17:37:21.492569 coreos-metadata[1417]: Sep 12 17:37:21.492 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Sep 12 17:37:21.499655 coreos-metadata[1417]: Sep 12 17:37:21.497 INFO Fetch successful Sep 12 17:37:21.508218 extend-filesystems[1421]: Found loop4 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found loop5 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found loop6 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found loop7 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found sda Sep 12 17:37:21.508218 extend-filesystems[1421]: Found sda1 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found sda2 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found sda3 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found usr Sep 12 17:37:21.508218 extend-filesystems[1421]: Found sda4 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found sda6 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found sda7 Sep 12 17:37:21.508218 extend-filesystems[1421]: Found sda9 Sep 12 17:37:21.508218 extend-filesystems[1421]: Checking size of /dev/sda9 Sep 12 17:37:21.710873 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 12 17:37:21.710935 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1286) Sep 12 17:37:21.710968 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 12 17:37:21.507945 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 12 17:37:21.711377 extend-filesystems[1421]: Resized partition /dev/sda9 Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:30:39 UTC 2025 (1): Starting Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: ---------------------------------------------------- Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: corporation. Support and training for ntp-4 are Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: available at https://www.nwtime.org/support Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: ---------------------------------------------------- Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: proto: precision = 0.091 usec (-23) Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: basedate set to 2025-08-31 Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: gps base set to 2025-08-31 (week 2382) Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: Listen normally on 3 eth0 10.128.0.49:123 Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: Listen normally on 4 lo [::1]:123 Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: bind(21) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 
17:37:21 ntpd[1424]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:31%2#123 Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: failed to init interface for address fe80::4001:aff:fe80:31%2 Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: Listening on routing socket on fd #21 for interface updates Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:37:21.723948 ntpd[1424]: 12 Sep 17:37:21 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:37:21.593936 dbus-daemon[1418]: [system] SELinux support is enabled Sep 12 17:37:21.533431 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:37:21.728356 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:37:21.728356 extend-filesystems[1442]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 17:37:21.728356 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 12 17:37:21.728356 extend-filesystems[1442]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 12 17:37:21.600471 dbus-daemon[1418]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1368 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:37:21.670098 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 12 17:37:21.804194 extend-filesystems[1421]: Resized filesystem in /dev/sda9 Sep 12 17:37:21.640545 ntpd[1424]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:30:39 UTC 2025 (1): Starting Sep 12 17:37:21.673565 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 12 17:37:21.640578 ntpd[1424]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:37:21.679340 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:37:21.640594 ntpd[1424]: ---------------------------------------------------- Sep 12 17:37:21.743835 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:37:21.830523 update_engine[1447]: I20250912 17:37:21.788414 1447 main.cc:92] Flatcar Update Engine starting Sep 12 17:37:21.830523 update_engine[1447]: I20250912 17:37:21.790635 1447 update_check_scheduler.cc:74] Next update check in 9m29s Sep 12 17:37:21.640613 ntpd[1424]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:37:21.772487 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:37:21.640627 ntpd[1424]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:37:21.808013 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:37:21.640641 ntpd[1424]: corporation. Support and training for ntp-4 are Sep 12 17:37:21.808045 systemd-logind[1436]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 12 17:37:21.640655 ntpd[1424]: available at https://www.nwtime.org/support Sep 12 17:37:21.808077 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:37:21.640670 ntpd[1424]: ---------------------------------------------------- Sep 12 17:37:21.809824 systemd-logind[1436]: New seat seat0. Sep 12 17:37:21.645245 ntpd[1424]: proto: precision = 0.091 usec (-23) Sep 12 17:37:21.824527 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 12 17:37:21.647466 ntpd[1424]: basedate set to 2025-08-31 Sep 12 17:37:21.647497 ntpd[1424]: gps base set to 2025-08-31 (week 2382) Sep 12 17:37:21.658112 ntpd[1424]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:37:21.658231 ntpd[1424]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:37:21.658528 ntpd[1424]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:37:21.658608 ntpd[1424]: Listen normally on 3 eth0 10.128.0.49:123 Sep 12 17:37:21.658678 ntpd[1424]: Listen normally on 4 lo [::1]:123 Sep 12 17:37:21.658765 ntpd[1424]: bind(21) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:37:21.658798 ntpd[1424]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:31%2#123 Sep 12 17:37:21.658819 ntpd[1424]: failed to init interface for address fe80::4001:aff:fe80:31%2 Sep 12 17:37:21.658877 ntpd[1424]: Listening on routing socket on fd #21 for interface updates Sep 12 17:37:21.660827 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:37:21.660874 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:37:21.839292 jq[1451]: true Sep 12 17:37:21.854813 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:37:21.855207 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:37:21.855757 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:37:21.856050 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:37:21.870266 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:37:21.870564 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:37:21.885842 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:37:21.886218 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 12 17:37:21.952243 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:37:21.976297 dbus-daemon[1418]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 17:37:21.979852 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:37:22.000873 jq[1455]: true
Sep 12 17:37:22.044516 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:37:22.055273 tar[1454]: linux-amd64/helm
Sep 12 17:37:22.066374 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:37:22.079483 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:37:22.079794 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:37:22.080034 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:37:22.110636 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 12 17:37:22.123398 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:37:22.123696 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:37:22.149569 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:37:22.264843 bash[1488]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:37:22.273262 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:37:22.297602 systemd[1]: Starting sshkeys.service...
Sep 12 17:37:22.303018 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:37:22.391017 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 17:37:22.432067 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 17:37:22.494023 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:37:22.541747 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:37:22.561805 coreos-metadata[1497]: Sep 12 17:37:22.561 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Sep 12 17:37:22.563743 systemd[1]: Started sshd@0-10.128.0.49:22-139.178.89.65:54198.service - OpenSSH per-connection server daemon (139.178.89.65:54198).
Sep 12 17:37:22.569129 coreos-metadata[1497]: Sep 12 17:37:22.569 INFO Fetch failed with 404: resource not found
Sep 12 17:37:22.569646 coreos-metadata[1497]: Sep 12 17:37:22.569 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Sep 12 17:37:22.585168 coreos-metadata[1497]: Sep 12 17:37:22.583 INFO Fetch successful
Sep 12 17:37:22.585168 coreos-metadata[1497]: Sep 12 17:37:22.583 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Sep 12 17:37:22.589631 coreos-metadata[1497]: Sep 12 17:37:22.585 INFO Fetch failed with 404: resource not found
Sep 12 17:37:22.589631 coreos-metadata[1497]: Sep 12 17:37:22.586 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Sep 12 17:37:22.589631 coreos-metadata[1497]: Sep 12 17:37:22.587 INFO Fetch failed with 404: resource not found
Sep 12 17:37:22.589631 coreos-metadata[1497]: Sep 12 17:37:22.587 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Sep 12 17:37:22.595526 coreos-metadata[1497]: Sep 12 17:37:22.594 INFO Fetch successful
Sep 12 17:37:22.606561 unknown[1497]: wrote ssh authorized keys file for user: core
Sep 12 17:37:22.641398 dbus-daemon[1418]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 12 17:37:22.641646 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 12 17:37:22.642624 ntpd[1424]: bind(24) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:37:22.643583 ntpd[1424]: 12 Sep 17:37:22 ntpd[1424]: bind(24) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:37:22.643583 ntpd[1424]: 12 Sep 17:37:22 ntpd[1424]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:31%2#123
Sep 12 17:37:22.643583 ntpd[1424]: 12 Sep 17:37:22 ntpd[1424]: failed to init interface for address fe80::4001:aff:fe80:31%2
Sep 12 17:37:22.642677 ntpd[1424]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:31%2#123
Sep 12 17:37:22.642703 ntpd[1424]: failed to init interface for address fe80::4001:aff:fe80:31%2
Sep 12 17:37:22.644547 dbus-daemon[1418]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1480 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 12 17:37:22.655460 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:37:22.656135 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:37:22.687510 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 12 17:37:22.713757 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:37:22.740095 update-ssh-keys[1516]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:37:22.744097 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 12 17:37:22.769680 systemd[1]: Finished sshkeys.service.
Sep 12 17:37:22.772819 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:37:22.793860 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:37:22.817770 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:37:22.838436 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:37:22.848430 polkitd[1519]: Started polkitd version 121
Sep 12 17:37:22.852102 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:37:22.870899 polkitd[1519]: Loading rules from directory /etc/polkit-1/rules.d
Sep 12 17:37:22.871212 polkitd[1519]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 12 17:37:22.875754 polkitd[1519]: Finished loading, compiling and executing 2 rules
Sep 12 17:37:22.881011 dbus-daemon[1418]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 12 17:37:22.882051 systemd[1]: Started polkit.service - Authorization Manager.
Sep 12 17:37:22.881499 polkitd[1519]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 12 17:37:22.925246 systemd-hostnamed[1480]: Hostname set to (transient)
Sep 12 17:37:22.926281 systemd-resolved[1369]: System hostname changed to 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal'.
Sep 12 17:37:22.937501 containerd[1456]: time="2025-09-12T17:37:22.936083230Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:37:23.000450 containerd[1456]: time="2025-09-12T17:37:22.999206091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.002705309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.002789572Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.002822097Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.003076418Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.003119174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.003271165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.003297419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.003585311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.003611511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.003634358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004172 containerd[1456]: time="2025-09-12T17:37:23.003651928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004750 containerd[1456]: time="2025-09-12T17:37:23.003766160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004750 containerd[1456]: time="2025-09-12T17:37:23.004087468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004750 containerd[1456]: time="2025-09-12T17:37:23.004325855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:23.004750 containerd[1456]: time="2025-09-12T17:37:23.004353239Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:37:23.004750 containerd[1456]: time="2025-09-12T17:37:23.004485046Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:37:23.004750 containerd[1456]: time="2025-09-12T17:37:23.004558603Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:37:23.018388 containerd[1456]: time="2025-09-12T17:37:23.017353154Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:37:23.018388 containerd[1456]: time="2025-09-12T17:37:23.017585513Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:37:23.018388 containerd[1456]: time="2025-09-12T17:37:23.017815883Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:37:23.018388 containerd[1456]: time="2025-09-12T17:37:23.017856486Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:37:23.018388 containerd[1456]: time="2025-09-12T17:37:23.017890427Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:37:23.018388 containerd[1456]: time="2025-09-12T17:37:23.018214673Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019107174Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019509327Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019556269Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019582255Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019609414Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019635616Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019659687Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019690860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019720825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019745221Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019779779Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019804052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019841270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.020780 containerd[1456]: time="2025-09-12T17:37:23.019864756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.019887424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.021688697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.021758189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.021795292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.021819542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.021854834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.021887940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.021924808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.021952076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.021979211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.022010988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.022046901Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.022105913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.022131116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.023536 containerd[1456]: time="2025-09-12T17:37:23.022177250Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:37:23.026041 containerd[1456]: time="2025-09-12T17:37:23.022263995Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:37:23.026041 containerd[1456]: time="2025-09-12T17:37:23.022314853Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:37:23.026041 containerd[1456]: time="2025-09-12T17:37:23.022344692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:37:23.026041 containerd[1456]: time="2025-09-12T17:37:23.022368734Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:37:23.026041 containerd[1456]: time="2025-09-12T17:37:23.022392555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.026041 containerd[1456]: time="2025-09-12T17:37:23.022420716Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:37:23.026041 containerd[1456]: time="2025-09-12T17:37:23.022444492Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:37:23.026041 containerd[1456]: time="2025-09-12T17:37:23.022463727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.023375770Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.023488539Z" level=info msg="Connect containerd service"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.023572709Z" level=info msg="using legacy CRI server"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.023588385Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.023884898Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.024956975Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.025816186Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.025900128Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.025930508Z" level=info msg="Start subscribing containerd event"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.025995868Z" level=info msg="Start recovering state"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.026083789Z" level=info msg="Start event monitor"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.026108806Z" level=info msg="Start snapshots syncer"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.026125965Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.026171259Z" level=info msg="Start streaming server"
Sep 12 17:37:23.026873 containerd[1456]: time="2025-09-12T17:37:23.026268747Z" level=info msg="containerd successfully booted in 0.092989s"
Sep 12 17:37:23.027195 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:37:23.059315 systemd-networkd[1368]: eth0: Gained IPv6LL
Sep 12 17:37:23.064369 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:37:23.079490 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:37:23.102512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:37:23.123336 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:37:23.148991 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Sep 12 17:37:23.166371 sshd[1508]: Accepted publickey for core from 139.178.89.65 port 54198 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:37:23.184893 sshd[1508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:23.202988 init.sh[1541]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Sep 12 17:37:23.202988 init.sh[1541]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Sep 12 17:37:23.202988 init.sh[1541]: + /usr/bin/google_instance_setup
Sep 12 17:37:23.228456 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 17:37:23.250439 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 17:37:23.263348 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:37:23.293778 systemd-logind[1436]: New session 1 of user core.
Sep 12 17:37:23.326779 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 17:37:23.361412 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:37:23.402348 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:37:23.518495 tar[1454]: linux-amd64/LICENSE
Sep 12 17:37:23.518495 tar[1454]: linux-amd64/README.md
Sep 12 17:37:23.554078 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:37:23.698044 systemd[1553]: Queued start job for default target default.target.
Sep 12 17:37:23.706325 systemd[1553]: Created slice app.slice - User Application Slice.
Sep 12 17:37:23.706375 systemd[1553]: Reached target paths.target - Paths.
Sep 12 17:37:23.706402 systemd[1553]: Reached target timers.target - Timers.
Sep 12 17:37:23.710371 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 17:37:23.744562 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 17:37:23.744694 systemd[1553]: Reached target sockets.target - Sockets.
Sep 12 17:37:23.744718 systemd[1553]: Reached target basic.target - Basic System.
Sep 12 17:37:23.744799 systemd[1553]: Reached target default.target - Main User Target.
Sep 12 17:37:23.744856 systemd[1553]: Startup finished in 318ms.
Sep 12 17:37:23.745062 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 17:37:23.765753 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 17:37:24.083220 systemd[1]: Started sshd@1-10.128.0.49:22-139.178.89.65:48192.service - OpenSSH per-connection server daemon (139.178.89.65:48192).
Sep 12 17:37:24.202600 instance-setup[1547]: INFO Running google_set_multiqueue.
Sep 12 17:37:24.226987 instance-setup[1547]: INFO Set channels for eth0 to 2.
Sep 12 17:37:24.233461 instance-setup[1547]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Sep 12 17:37:24.236776 instance-setup[1547]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Sep 12 17:37:24.236910 instance-setup[1547]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Sep 12 17:37:24.238104 instance-setup[1547]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Sep 12 17:37:24.238762 instance-setup[1547]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Sep 12 17:37:24.242406 instance-setup[1547]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Sep 12 17:37:24.242486 instance-setup[1547]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Sep 12 17:37:24.244784 instance-setup[1547]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Sep 12 17:37:24.263874 instance-setup[1547]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 12 17:37:24.270007 instance-setup[1547]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 12 17:37:24.272420 instance-setup[1547]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Sep 12 17:37:24.273214 instance-setup[1547]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Sep 12 17:37:24.311109 init.sh[1541]: + /usr/bin/google_metadata_script_runner --script-type startup
Sep 12 17:37:24.514364 startup-script[1599]: INFO Starting startup scripts.
Sep 12 17:37:24.514889 sshd[1569]: Accepted publickey for core from 139.178.89.65 port 48192 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:37:24.517418 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:24.527321 systemd-logind[1436]: New session 2 of user core.
Sep 12 17:37:24.530360 startup-script[1599]: INFO No startup scripts found in metadata.
Sep 12 17:37:24.530467 startup-script[1599]: INFO Finished running startup scripts.
Sep 12 17:37:24.532422 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 17:37:24.569325 init.sh[1541]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Sep 12 17:37:24.569325 init.sh[1541]: + daemon_pids=()
Sep 12 17:37:24.569621 init.sh[1541]: + for d in accounts clock_skew network
Sep 12 17:37:24.571416 init.sh[1541]: + daemon_pids+=($!)
Sep 12 17:37:24.571416 init.sh[1541]: + for d in accounts clock_skew network
Sep 12 17:37:24.571886 init.sh[1603]: + /usr/bin/google_accounts_daemon
Sep 12 17:37:24.572400 init.sh[1604]: + /usr/bin/google_clock_skew_daemon
Sep 12 17:37:24.572682 init.sh[1541]: + daemon_pids+=($!)
Sep 12 17:37:24.572682 init.sh[1541]: + for d in accounts clock_skew network
Sep 12 17:37:24.572682 init.sh[1541]: + daemon_pids+=($!)
Sep 12 17:37:24.572682 init.sh[1541]: + NOTIFY_SOCKET=/run/systemd/notify
Sep 12 17:37:24.572682 init.sh[1541]: + /usr/bin/systemd-notify --ready
Sep 12 17:37:24.574270 init.sh[1605]: + /usr/bin/google_network_daemon
Sep 12 17:37:24.605562 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Sep 12 17:37:24.619439 init.sh[1541]: + wait -n 1603 1604 1605
Sep 12 17:37:24.803764 sshd[1569]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:24.816310 systemd[1]: sshd@1-10.128.0.49:22-139.178.89.65:48192.service: Deactivated successfully.
Sep 12 17:37:24.822947 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 17:37:24.826876 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit.
Sep 12 17:37:24.829775 systemd-logind[1436]: Removed session 2.
Sep 12 17:37:24.889710 systemd[1]: Started sshd@2-10.128.0.49:22-139.178.89.65:48194.service - OpenSSH per-connection server daemon (139.178.89.65:48194).
Sep 12 17:37:25.013275 google-clock-skew[1604]: INFO Starting Google Clock Skew daemon.
Sep 12 17:37:25.032489 google-clock-skew[1604]: INFO Clock drift token has changed: 0.
Sep 12 17:37:25.106436 google-networking[1605]: INFO Starting Google Networking daemon.
Sep 12 17:37:25.110288 groupadd[1620]: group added to /etc/group: name=google-sudoers, GID=1000
Sep 12 17:37:25.117043 groupadd[1620]: group added to /etc/gshadow: name=google-sudoers
Sep 12 17:37:25.203896 groupadd[1620]: new group: name=google-sudoers, GID=1000
Sep 12 17:37:25.243417 google-accounts[1603]: INFO Starting Google Accounts daemon.
Sep 12 17:37:25.262222 google-accounts[1603]: WARNING OS Login not installed.
Sep 12 17:37:25.264490 google-accounts[1603]: INFO Creating a new user account for 0.
Sep 12 17:37:25.273378 init.sh[1629]: useradd: invalid user name '0': use --badname to ignore
Sep 12 17:37:25.274176 google-accounts[1603]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Sep 12 17:37:25.324380 sshd[1615]: Accepted publickey for core from 139.178.89.65 port 48194 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:37:25.325993 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:25.333333 systemd-logind[1436]: New session 3 of user core.
Sep 12 17:37:25.339440 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 17:37:25.605193 sshd[1615]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:25.611107 systemd[1]: sshd@2-10.128.0.49:22-139.178.89.65:48194.service: Deactivated successfully.
Sep 12 17:37:25.615605 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 17:37:25.617464 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit.
Sep 12 17:37:25.619925 systemd-logind[1436]: Removed session 3.
Sep 12 17:37:25.641274 ntpd[1424]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:31%2]:123
Sep 12 17:37:25.641921 ntpd[1424]: 12 Sep 17:37:25 ntpd[1424]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:31%2]:123
Sep 12 17:37:26.000598 systemd-resolved[1369]: Clock change detected. Flushing caches.
Sep 12 17:37:26.001699 google-clock-skew[1604]: INFO Synced system time with hardware clock.
Sep 12 17:37:26.073823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:37:26.087103 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:37:26.094722 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:37:26.099899 systemd[1]: Startup finished in 1.120s (kernel) + 4min 31.733s (initrd) + 11.438s (userspace) = 4min 44.293s.
Sep 12 17:37:27.416826 kubelet[1640]: E0912 17:37:27.416732 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:37:27.419759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:37:27.420029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:37:27.421139 systemd[1]: kubelet.service: Consumed 1.543s CPU time.
Sep 12 17:37:35.885991 systemd[1]: Started sshd@3-10.128.0.49:22-139.178.89.65:48308.service - OpenSSH per-connection server daemon (139.178.89.65:48308).
Sep 12 17:37:36.252613 sshd[1652]: Accepted publickey for core from 139.178.89.65 port 48308 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:37:36.254533 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:36.260932 systemd-logind[1436]: New session 4 of user core.
Sep 12 17:37:36.267847 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 17:37:36.517607 sshd[1652]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:36.523651 systemd[1]: sshd@3-10.128.0.49:22-139.178.89.65:48308.service: Deactivated successfully.
Sep 12 17:37:36.525946 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 17:37:36.526932 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit.
Sep 12 17:37:36.528367 systemd-logind[1436]: Removed session 4.
Sep 12 17:37:36.584910 systemd[1]: Started sshd@4-10.128.0.49:22-139.178.89.65:48318.service - OpenSSH per-connection server daemon (139.178.89.65:48318).
Sep 12 17:37:36.950479 sshd[1659]: Accepted publickey for core from 139.178.89.65 port 48318 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:37:36.952357 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:36.958773 systemd-logind[1436]: New session 5 of user core.
Sep 12 17:37:36.965772 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 17:37:37.208044 sshd[1659]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:37.214141 systemd[1]: sshd@4-10.128.0.49:22-139.178.89.65:48318.service: Deactivated successfully.
Sep 12 17:37:37.216452 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 17:37:37.217424 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit.
Sep 12 17:37:37.218863 systemd-logind[1436]: Removed session 5.
Sep 12 17:37:37.277811 systemd[1]: Started sshd@5-10.128.0.49:22-139.178.89.65:48322.service - OpenSSH per-connection server daemon (139.178.89.65:48322).
Sep 12 17:37:37.431364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:37:37.438112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:37:37.657980 sshd[1666]: Accepted publickey for core from 139.178.89.65 port 48322 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:37:37.660050 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:37.669801 systemd-logind[1436]: New session 6 of user core.
Sep 12 17:37:37.678817 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:37:37.786905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:37:37.793911 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:37:37.853207 kubelet[1677]: E0912 17:37:37.853105 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:37:37.858110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:37:37.858365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:37:37.932266 sshd[1666]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:37.936710 systemd[1]: sshd@5-10.128.0.49:22-139.178.89.65:48322.service: Deactivated successfully.
Sep 12 17:37:37.939007 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:37:37.940768 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:37:37.942318 systemd-logind[1436]: Removed session 6.
Sep 12 17:37:38.005041 systemd[1]: Started sshd@6-10.128.0.49:22-139.178.89.65:48328.service - OpenSSH per-connection server daemon (139.178.89.65:48328).
Sep 12 17:37:38.382366 sshd[1688]: Accepted publickey for core from 139.178.89.65 port 48328 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:37:38.385680 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:38.392382 systemd-logind[1436]: New session 7 of user core.
Sep 12 17:37:38.401868 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:37:38.625620 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:37:38.626131 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:37:38.641790 sudo[1691]: pam_unix(sudo:session): session closed for user root
Sep 12 17:37:38.700569 sshd[1688]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:38.705261 systemd[1]: sshd@6-10.128.0.49:22-139.178.89.65:48328.service: Deactivated successfully.
Sep 12 17:37:38.707844 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:37:38.709753 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:37:38.711314 systemd-logind[1436]: Removed session 7.
Sep 12 17:37:38.776935 systemd[1]: Started sshd@7-10.128.0.49:22-139.178.89.65:48338.service - OpenSSH per-connection server daemon (139.178.89.65:48338).
Sep 12 17:37:39.150972 sshd[1696]: Accepted publickey for core from 139.178.89.65 port 48338 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:37:39.152983 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:39.158581 systemd-logind[1436]: New session 8 of user core.
Sep 12 17:37:39.170817 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:37:39.375542 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:37:39.376075 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:37:39.381220 sudo[1700]: pam_unix(sudo:session): session closed for user root
Sep 12 17:37:39.396147 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 12 17:37:39.396688 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:37:39.416973 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 12 17:37:39.420885 auditctl[1703]: No rules
Sep 12 17:37:39.421601 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:37:39.421896 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 12 17:37:39.429262 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:37:39.464271 augenrules[1721]: No rules
Sep 12 17:37:39.465205 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:37:39.467129 sudo[1699]: pam_unix(sudo:session): session closed for user root
Sep 12 17:37:39.526136 sshd[1696]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:39.530740 systemd[1]: sshd@7-10.128.0.49:22-139.178.89.65:48338.service: Deactivated successfully.
Sep 12 17:37:39.533137 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:37:39.535048 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:37:39.536821 systemd-logind[1436]: Removed session 8.
Sep 12 17:37:39.600961 systemd[1]: Started sshd@8-10.128.0.49:22-139.178.89.65:48340.service - OpenSSH per-connection server daemon (139.178.89.65:48340).
Sep 12 17:37:39.976399 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 48340 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8
Sep 12 17:37:39.978283 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:39.985521 systemd-logind[1436]: New session 9 of user core.
Sep 12 17:37:39.990820 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:37:40.200447 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:37:40.200966 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:37:40.687964 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:37:40.692173 (dockerd)[1747]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:37:41.178444 dockerd[1747]: time="2025-09-12T17:37:41.178261435Z" level=info msg="Starting up"
Sep 12 17:37:41.305270 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2805012431-merged.mount: Deactivated successfully.
Sep 12 17:37:41.368994 dockerd[1747]: time="2025-09-12T17:37:41.368927473Z" level=info msg="Loading containers: start."
Sep 12 17:37:41.550661 kernel: Initializing XFRM netlink socket
Sep 12 17:37:41.680256 systemd-networkd[1368]: docker0: Link UP
Sep 12 17:37:41.706854 dockerd[1747]: time="2025-09-12T17:37:41.706796389Z" level=info msg="Loading containers: done."
Sep 12 17:37:41.735263 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2293400610-merged.mount: Deactivated successfully.
Sep 12 17:37:41.738107 dockerd[1747]: time="2025-09-12T17:37:41.738032443Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:37:41.738260 dockerd[1747]: time="2025-09-12T17:37:41.738183624Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 12 17:37:41.738418 dockerd[1747]: time="2025-09-12T17:37:41.738378531Z" level=info msg="Daemon has completed initialization"
Sep 12 17:37:41.783958 dockerd[1747]: time="2025-09-12T17:37:41.783717392Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:37:41.784087 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:37:42.829520 containerd[1456]: time="2025-09-12T17:37:42.829422914Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 12 17:37:43.425544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611326396.mount: Deactivated successfully.
Sep 12 17:37:45.262054 containerd[1456]: time="2025-09-12T17:37:45.261979057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:45.263787 containerd[1456]: time="2025-09-12T17:37:45.263716649Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28124707"
Sep 12 17:37:45.265622 containerd[1456]: time="2025-09-12T17:37:45.265497709Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:45.271537 containerd[1456]: time="2025-09-12T17:37:45.269773949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:45.271816 containerd[1456]: time="2025-09-12T17:37:45.271765651Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.442283652s"
Sep 12 17:37:45.271952 containerd[1456]: time="2025-09-12T17:37:45.271926996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 12 17:37:45.272746 containerd[1456]: time="2025-09-12T17:37:45.272682780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 12 17:37:46.941937 containerd[1456]: time="2025-09-12T17:37:46.941857450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:46.943699 containerd[1456]: time="2025-09-12T17:37:46.943531630Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24718566"
Sep 12 17:37:46.944986 containerd[1456]: time="2025-09-12T17:37:46.944918182Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:46.950221 containerd[1456]: time="2025-09-12T17:37:46.949601016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:46.951776 containerd[1456]: time="2025-09-12T17:37:46.951585741Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.678855319s"
Sep 12 17:37:46.951776 containerd[1456]: time="2025-09-12T17:37:46.951642540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 12 17:37:46.954842 containerd[1456]: time="2025-09-12T17:37:46.954444019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 12 17:37:47.931776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:37:47.940997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:37:48.256469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:37:48.271130 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:37:48.373294 kubelet[1957]: E0912 17:37:48.373112 1957 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:37:48.377627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:37:48.378270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:37:48.570717 containerd[1456]: time="2025-09-12T17:37:48.570543551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:48.572972 containerd[1456]: time="2025-09-12T17:37:48.572678881Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18789614"
Sep 12 17:37:48.574299 containerd[1456]: time="2025-09-12T17:37:48.574219566Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:48.578543 containerd[1456]: time="2025-09-12T17:37:48.578454567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:48.579913 containerd[1456]: time="2025-09-12T17:37:48.579864829Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.625371348s"
Sep 12 17:37:48.580172 containerd[1456]: time="2025-09-12T17:37:48.580042486Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 12 17:37:48.580807 containerd[1456]: time="2025-09-12T17:37:48.580764592Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 12 17:37:51.158217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733152367.mount: Deactivated successfully.
Sep 12 17:37:51.857999 containerd[1456]: time="2025-09-12T17:37:51.857886089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:51.859538 containerd[1456]: time="2025-09-12T17:37:51.859396199Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30412147"
Sep 12 17:37:51.861070 containerd[1456]: time="2025-09-12T17:37:51.860998658Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:51.864128 containerd[1456]: time="2025-09-12T17:37:51.864058245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:51.864989 containerd[1456]: time="2025-09-12T17:37:51.864939997Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 3.284006099s"
Sep 12 17:37:51.865093 containerd[1456]: time="2025-09-12T17:37:51.864996108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 12 17:37:51.866227 containerd[1456]: time="2025-09-12T17:37:51.865566135Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:37:52.430276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141641243.mount: Deactivated successfully.
Sep 12 17:37:53.167863 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 17:37:53.801651 containerd[1456]: time="2025-09-12T17:37:53.801578527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:53.803453 containerd[1456]: time="2025-09-12T17:37:53.803383258Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Sep 12 17:37:53.804763 containerd[1456]: time="2025-09-12T17:37:53.804687093Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:53.808903 containerd[1456]: time="2025-09-12T17:37:53.808826456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:53.810557 containerd[1456]: time="2025-09-12T17:37:53.810353095Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.94474863s"
Sep 12 17:37:53.810557 containerd[1456]: time="2025-09-12T17:37:53.810406097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:37:53.811045 containerd[1456]: time="2025-09-12T17:37:53.810999264Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:37:54.293646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873929714.mount: Deactivated successfully.
Sep 12 17:37:54.303204 containerd[1456]: time="2025-09-12T17:37:54.303110034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:54.304647 containerd[1456]: time="2025-09-12T17:37:54.304432733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Sep 12 17:37:54.306545 containerd[1456]: time="2025-09-12T17:37:54.306202201Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:54.309765 containerd[1456]: time="2025-09-12T17:37:54.309599488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:54.311528 containerd[1456]: time="2025-09-12T17:37:54.310725531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 499.684709ms"
Sep 12 17:37:54.311528 containerd[1456]: time="2025-09-12T17:37:54.310775201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:37:54.311528 containerd[1456]: time="2025-09-12T17:37:54.311378567Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 17:37:54.839094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336981988.mount: Deactivated successfully.
Sep 12 17:37:57.382267 containerd[1456]: time="2025-09-12T17:37:57.382181340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:57.384036 containerd[1456]: time="2025-09-12T17:37:57.383959963Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56918218"
Sep 12 17:37:57.386133 containerd[1456]: time="2025-09-12T17:37:57.386054996Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:57.391458 containerd[1456]: time="2025-09-12T17:37:57.390759802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:57.392614 containerd[1456]: time="2025-09-12T17:37:57.392552258Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.081136956s"
Sep 12 17:37:57.392744 containerd[1456]: time="2025-09-12T17:37:57.392614737Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 12 17:37:58.431493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 17:37:58.441672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:37:58.754803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:37:58.757915 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:37:58.835670 kubelet[2117]: E0912 17:37:58.835612 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:37:58.838713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:37:58.838968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:38:00.721736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:38:00.729990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:38:00.773127 systemd[1]: Reloading requested from client PID 2131 ('systemctl') (unit session-9.scope)...
Sep 12 17:38:00.773161 systemd[1]: Reloading...
Sep 12 17:38:00.947542 zram_generator::config[2171]: No configuration found.
Sep 12 17:38:01.113835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:38:01.218452 systemd[1]: Reloading finished in 444 ms.
Sep 12 17:38:01.279442 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:38:01.279605 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:38:01.279951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:38:01.287002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:38:01.592781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:38:01.598764 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:38:01.662702 kubelet[2222]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:38:01.662702 kubelet[2222]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:38:01.662702 kubelet[2222]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:38:01.663292 kubelet[2222]: I0912 17:38:01.662798 2222 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:38:02.665057 kubelet[2222]: I0912 17:38:02.664991 2222 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:38:02.665057 kubelet[2222]: I0912 17:38:02.665045 2222 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:38:02.665763 kubelet[2222]: I0912 17:38:02.665496 2222 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:38:02.713263 kubelet[2222]: E0912 17:38:02.713188 2222 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:38:02.714741 kubelet[2222]: I0912 17:38:02.714481 2222 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:38:02.724799 kubelet[2222]: E0912 17:38:02.724723 2222 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:38:02.724799 kubelet[2222]: I0912 17:38:02.724773 2222 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:38:02.731541 kubelet[2222]: I0912 17:38:02.731257 2222 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:38:02.731541 kubelet[2222]: I0912 17:38:02.731419 2222 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:38:02.731774 kubelet[2222]: I0912 17:38:02.731644 2222 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:38:02.731964 kubelet[2222]: I0912 17:38:02.731686 2222 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:38:02.732149 kubelet[2222]: I0912 17:38:02.731974 2222 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:38:02.732149 kubelet[2222]: I0912 17:38:02.731993 2222 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:38:02.732248 kubelet[2222]: I0912 17:38:02.732165 2222 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:38:02.737956 kubelet[2222]: I0912 17:38:02.737886 2222 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:38:02.737956 kubelet[2222]: I0912 17:38:02.737943 2222 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:38:02.738169 kubelet[2222]: I0912 17:38:02.737996 2222 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:38:02.738169 kubelet[2222]: I0912 17:38:02.738039 2222 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:38:02.742000 kubelet[2222]: W0912 17:38:02.741338 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 12 17:38:02.742000 kubelet[2222]: E0912 17:38:02.741452 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:38:02.746541 kubelet[2222]: I0912 17:38:02.745540 2222 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:38:02.747638 kubelet[2222]: I0912 17:38:02.747611 2222 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:38:02.747917 kubelet[2222]: W0912 17:38:02.747898 2222 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:38:02.751052 kubelet[2222]: W0912 17:38:02.750981 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 12 17:38:02.751181 kubelet[2222]: E0912 17:38:02.751073 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:38:02.754913 kubelet[2222]: I0912 17:38:02.754881 2222 server.go:1274] "Started kubelet"
Sep 12 17:38:02.757719 kubelet[2222]: I0912 17:38:02.757652 2222 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:38:02.758931 kubelet[2222]: I0912 17:38:02.758881 2222 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:38:02.766413 kubelet[2222]: I0912 17:38:02.764896 2222 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:38:02.766413 kubelet[2222]: I0912 17:38:02.765913 2222 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:38:02.766413 kubelet[2222]: I0912 17:38:02.766238 2222 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:38:02.773481 kubelet[2222]: E0912 17:38:02.770849 2222 event.go:368] "Unable to write event
(may retry after sleeping)" err="Post \"https://10.128.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal.186499a3a7c62741 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,UID:ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,},FirstTimestamp:2025-09-12 17:38:02.754836289 +0000 UTC m=+1.149767167,LastTimestamp:2025-09-12 17:38:02.754836289 +0000 UTC m=+1.149767167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,}" Sep 12 17:38:02.774357 kubelet[2222]: I0912 17:38:02.773891 2222 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:38:02.774631 kubelet[2222]: I0912 17:38:02.774611 2222 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:38:02.775102 kubelet[2222]: E0912 17:38:02.775074 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" not found" Sep 12 17:38:02.776951 kubelet[2222]: E0912 17:38:02.776908 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="200ms" Sep 12 17:38:02.778003 kubelet[2222]: 
I0912 17:38:02.777320 2222 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:38:02.778003 kubelet[2222]: I0912 17:38:02.777376 2222 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:38:02.778003 kubelet[2222]: W0912 17:38:02.777871 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Sep 12 17:38:02.778003 kubelet[2222]: E0912 17:38:02.777938 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:38:02.779886 kubelet[2222]: E0912 17:38:02.779853 2222 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:38:02.781790 kubelet[2222]: I0912 17:38:02.781765 2222 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:38:02.781942 kubelet[2222]: I0912 17:38:02.781926 2222 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:38:02.782180 kubelet[2222]: I0912 17:38:02.782150 2222 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:38:02.800968 kubelet[2222]: I0912 17:38:02.800887 2222 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:38:02.802658 kubelet[2222]: I0912 17:38:02.802493 2222 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:38:02.802658 kubelet[2222]: I0912 17:38:02.802574 2222 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:38:02.802658 kubelet[2222]: I0912 17:38:02.802624 2222 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:38:02.802934 kubelet[2222]: E0912 17:38:02.802696 2222 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:38:02.815255 kubelet[2222]: W0912 17:38:02.815178 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Sep 12 17:38:02.815435 kubelet[2222]: E0912 17:38:02.815269 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:38:02.826187 kubelet[2222]: I0912 17:38:02.825843 2222 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:38:02.826187 kubelet[2222]: I0912 17:38:02.825861 2222 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:38:02.826187 kubelet[2222]: I0912 17:38:02.825885 2222 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:38:02.829151 kubelet[2222]: I0912 17:38:02.828681 2222 policy_none.go:49] "None policy: Start" Sep 12 17:38:02.830330 kubelet[2222]: I0912 17:38:02.829916 2222 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:38:02.830330 kubelet[2222]: I0912 17:38:02.829950 2222 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:38:02.838963 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Sep 12 17:38:02.853447 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:38:02.858643 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:38:02.871703 kubelet[2222]: I0912 17:38:02.870845 2222 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:38:02.871703 kubelet[2222]: I0912 17:38:02.871139 2222 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:38:02.871703 kubelet[2222]: I0912 17:38:02.871157 2222 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:38:02.871703 kubelet[2222]: I0912 17:38:02.871497 2222 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:38:02.874068 kubelet[2222]: E0912 17:38:02.874028 2222 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" not found" Sep 12 17:38:02.928701 systemd[1]: Created slice kubepods-burstable-podb312c41e3294e6d7eae972350709b229.slice - libcontainer container kubepods-burstable-podb312c41e3294e6d7eae972350709b229.slice. Sep 12 17:38:02.947080 systemd[1]: Created slice kubepods-burstable-pod9205a42ad96fd6174f0b5dfcde644508.slice - libcontainer container kubepods-burstable-pod9205a42ad96fd6174f0b5dfcde644508.slice. Sep 12 17:38:02.959325 systemd[1]: Created slice kubepods-burstable-podf0f625f93a8a746f1119115babc90803.slice - libcontainer container kubepods-burstable-podf0f625f93a8a746f1119115babc90803.slice. 
Sep 12 17:38:02.977734 kubelet[2222]: E0912 17:38:02.977673 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="400ms"
Sep 12 17:38:02.977965 kubelet[2222]: I0912 17:38:02.977889 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:02.977965 kubelet[2222]: I0912 17:38:02.977938 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:02.978108 kubelet[2222]: I0912 17:38:02.977973 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:02.978108 kubelet[2222]: I0912 17:38:02.978009 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:02.978108 kubelet[2222]: I0912 17:38:02.978075 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b312c41e3294e6d7eae972350709b229-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"b312c41e3294e6d7eae972350709b229\") " pod="kube-system/kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:02.978258 kubelet[2222]: I0912 17:38:02.978106 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b312c41e3294e6d7eae972350709b229-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"b312c41e3294e6d7eae972350709b229\") " pod="kube-system/kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:02.978258 kubelet[2222]: I0912 17:38:02.978135 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b312c41e3294e6d7eae972350709b229-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"b312c41e3294e6d7eae972350709b229\") " pod="kube-system/kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:02.978258 kubelet[2222]: I0912 17:38:02.978165 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:02.979396 kubelet[2222]: I0912 17:38:02.979365 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:02.979858 kubelet[2222]: E0912 17:38:02.979791 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:03.079457 kubelet[2222]: I0912 17:38:03.079379 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0f625f93a8a746f1119115babc90803-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"f0f625f93a8a746f1119115babc90803\") " pod="kube-system/kube-scheduler-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:03.187666 kubelet[2222]: I0912 17:38:03.187610 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:03.188093 kubelet[2222]: E0912 17:38:03.188047 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:03.248530 containerd[1456]: time="2025-09-12T17:38:03.247741544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,Uid:b312c41e3294e6d7eae972350709b229,Namespace:kube-system,Attempt:0,}"
Sep 12 17:38:03.251450 containerd[1456]: time="2025-09-12T17:38:03.251362483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,Uid:9205a42ad96fd6174f0b5dfcde644508,Namespace:kube-system,Attempt:0,}"
Sep 12 17:38:03.268367 containerd[1456]: time="2025-09-12T17:38:03.268301885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,Uid:f0f625f93a8a746f1119115babc90803,Namespace:kube-system,Attempt:0,}"
Sep 12 17:38:03.378446 kubelet[2222]: E0912 17:38:03.378373 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="800ms"
Sep 12 17:38:03.594371 kubelet[2222]: I0912 17:38:03.594228 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:03.595097 kubelet[2222]: E0912 17:38:03.594873 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:03.621843 kubelet[2222]: W0912 17:38:03.621661 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 12 17:38:03.621843 kubelet[2222]: E0912 17:38:03.621793 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:38:03.998931 kubelet[2222]: W0912 17:38:03.973168 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 12 17:38:03.998931 kubelet[2222]: E0912 17:38:03.973243 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:38:04.024047 kubelet[2222]: W0912 17:38:04.023877 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 12 17:38:04.024047 kubelet[2222]: E0912 17:38:04.024006 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:38:04.179948 kubelet[2222]: E0912 17:38:04.179862 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="1.6s"
Sep 12 17:38:04.205094 kubelet[2222]: W0912 17:38:04.181256 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 12 17:38:04.205094 kubelet[2222]: E0912 17:38:04.181333 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:38:04.407740 kubelet[2222]: I0912 17:38:04.407497 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:04.408108 kubelet[2222]: E0912 17:38:04.408066 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal"
Sep 12 17:38:04.900253 kubelet[2222]: E0912 17:38:04.900195 2222 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:38:05.221717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2408314373.mount: Deactivated successfully.
Sep 12 17:38:05.386947 containerd[1456]: time="2025-09-12T17:38:05.386865764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:38:05.389282 containerd[1456]: time="2025-09-12T17:38:05.389209232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Sep 12 17:38:05.390759 containerd[1456]: time="2025-09-12T17:38:05.390682520Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:38:05.392065 containerd[1456]: time="2025-09-12T17:38:05.392020832Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:38:05.393765 containerd[1456]: time="2025-09-12T17:38:05.393700167Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:38:05.396580 containerd[1456]: time="2025-09-12T17:38:05.395548259Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:38:05.396580 containerd[1456]: time="2025-09-12T17:38:05.396417095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:38:05.400836 containerd[1456]: time="2025-09-12T17:38:05.400764176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:38:05.403195 containerd[1456]: time="2025-09-12T17:38:05.403131673Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.151656576s"
Sep 12 17:38:05.405248 containerd[1456]: time="2025-09-12T17:38:05.405184136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.157316187s"
Sep 12 17:38:05.407010 containerd[1456]: time="2025-09-12T17:38:05.406958758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.138548015s"
Sep 12 17:38:05.411398 kubelet[2222]: W0912 17:38:05.411342 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 12 17:38:05.411990 kubelet[2222]: E0912 17:38:05.411418 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:38:05.617640 containerd[1456]: time="2025-09-12T17:38:05.617007054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:38:05.617640 containerd[1456]: time="2025-09-12T17:38:05.617098443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:38:05.617640 containerd[1456]: time="2025-09-12T17:38:05.617126020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.617640 containerd[1456]: time="2025-09-12T17:38:05.617258672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.624135 containerd[1456]: time="2025-09-12T17:38:05.622892466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:38:05.624135 containerd[1456]: time="2025-09-12T17:38:05.622959608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:38:05.624135 containerd[1456]: time="2025-09-12T17:38:05.622985413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.625806 containerd[1456]: time="2025-09-12T17:38:05.624847154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.626610 containerd[1456]: time="2025-09-12T17:38:05.623386094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:38:05.626610 containerd[1456]: time="2025-09-12T17:38:05.626262469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:38:05.626610 containerd[1456]: time="2025-09-12T17:38:05.626288395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.626610 containerd[1456]: time="2025-09-12T17:38:05.626416646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.669704 systemd[1]: Started cri-containerd-aa143fbd98c11d544a720df8bf63592dd112ffdb1363daac54151bfe2336f5ff.scope - libcontainer container aa143fbd98c11d544a720df8bf63592dd112ffdb1363daac54151bfe2336f5ff.
Sep 12 17:38:05.680233 systemd[1]: Started cri-containerd-59b2ca2415ec9d13ecaeb18f117391f8e66ad2b5edbd597a0402d26b68d9f242.scope - libcontainer container 59b2ca2415ec9d13ecaeb18f117391f8e66ad2b5edbd597a0402d26b68d9f242.
Sep 12 17:38:05.685338 systemd[1]: Started cri-containerd-c34b6b603f6bdb8323dd969448da153e36df689b5d2fbcbf6ad333c01c92371a.scope - libcontainer container c34b6b603f6bdb8323dd969448da153e36df689b5d2fbcbf6ad333c01c92371a.
Sep 12 17:38:05.778418 containerd[1456]: time="2025-09-12T17:38:05.777972640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,Uid:9205a42ad96fd6174f0b5dfcde644508,Namespace:kube-system,Attempt:0,} returns sandbox id \"c34b6b603f6bdb8323dd969448da153e36df689b5d2fbcbf6ad333c01c92371a\""
Sep 12 17:38:05.782317 kubelet[2222]: E0912 17:38:05.781893 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="3.2s"
Sep 12 17:38:05.783886 kubelet[2222]: E0912 17:38:05.783607 2222 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flat"
Sep 12 17:38:05.790711 containerd[1456]: time="2025-09-12T17:38:05.790039864Z" level=info msg="CreateContainer within sandbox \"c34b6b603f6bdb8323dd969448da153e36df689b5d2fbcbf6ad333c01c92371a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:38:05.820455 containerd[1456]: time="2025-09-12T17:38:05.820345278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,Uid:b312c41e3294e6d7eae972350709b229,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa143fbd98c11d544a720df8bf63592dd112ffdb1363daac54151bfe2336f5ff\""
Sep 12 17:38:05.822972 kubelet[2222]: E0912 17:38:05.822929 2222 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-21291"
Sep 12 17:38:05.825741 containerd[1456]: time="2025-09-12T17:38:05.825684373Z" level=info msg="CreateContainer within sandbox \"aa143fbd98c11d544a720df8bf63592dd112ffdb1363daac54151bfe2336f5ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:38:05.836164 containerd[1456]: time="2025-09-12T17:38:05.836110432Z" level=info msg="CreateContainer within sandbox \"c34b6b603f6bdb8323dd969448da153e36df689b5d2fbcbf6ad333c01c92371a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e2de11c2ea7a573ee364a379efdf53ede5a499f184181e29b44999574f6ab25\""
Sep 12 17:38:05.838650 containerd[1456]: time="2025-09-12T17:38:05.837008924Z" level=info msg="StartContainer for \"6e2de11c2ea7a573ee364a379efdf53ede5a499f184181e29b44999574f6ab25\""
Sep 12 17:38:05.844130 containerd[1456]: time="2025-09-12T17:38:05.844080243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal,Uid:f0f625f93a8a746f1119115babc90803,Namespace:kube-system,Attempt:0,} returns sandbox id \"59b2ca2415ec9d13ecaeb18f117391f8e66ad2b5edbd597a0402d26b68d9f242\""
Sep 12 17:38:05.847475 kubelet[2222]: E0912 17:38:05.846965 2222 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-21291"
Sep 12 17:38:05.849895 containerd[1456]: time="2025-09-12T17:38:05.849840993Z" level=info msg="CreateContainer within sandbox \"59b2ca2415ec9d13ecaeb18f117391f8e66ad2b5edbd597a0402d26b68d9f242\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:38:05.854841 containerd[1456]: time="2025-09-12T17:38:05.854792698Z" level=info msg="CreateContainer within sandbox \"aa143fbd98c11d544a720df8bf63592dd112ffdb1363daac54151bfe2336f5ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3bead66a32af4e1188d93b05e028ed2f3a4e26d508b3a8539afecb06f7c45e24\""
Sep 12 17:38:05.858318 containerd[1456]: time="2025-09-12T17:38:05.858275432Z" level=info msg="StartContainer for \"3bead66a32af4e1188d93b05e028ed2f3a4e26d508b3a8539afecb06f7c45e24\""
Sep 12 17:38:05.894310 containerd[1456]: time="2025-09-12T17:38:05.892992389Z" level=info msg="CreateContainer within sandbox \"59b2ca2415ec9d13ecaeb18f117391f8e66ad2b5edbd597a0402d26b68d9f242\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd1d96f140657f31f941587d49091e2c9ed0fd14663750759afdcac04f5f8d61\""
Sep 12 17:38:05.897521 containerd[1456]: time="2025-09-12T17:38:05.896663517Z" level=info msg="StartContainer for \"fd1d96f140657f31f941587d49091e2c9ed0fd14663750759afdcac04f5f8d61\""
Sep 12 17:38:05.904386 systemd[1]: Started cri-containerd-6e2de11c2ea7a573ee364a379efdf53ede5a499f184181e29b44999574f6ab25.scope - libcontainer container 6e2de11c2ea7a573ee364a379efdf53ede5a499f184181e29b44999574f6ab25.
Sep 12 17:38:05.932827 systemd[1]: Started cri-containerd-3bead66a32af4e1188d93b05e028ed2f3a4e26d508b3a8539afecb06f7c45e24.scope - libcontainer container 3bead66a32af4e1188d93b05e028ed2f3a4e26d508b3a8539afecb06f7c45e24.
Sep 12 17:38:05.972805 systemd[1]: Started cri-containerd-fd1d96f140657f31f941587d49091e2c9ed0fd14663750759afdcac04f5f8d61.scope - libcontainer container fd1d96f140657f31f941587d49091e2c9ed0fd14663750759afdcac04f5f8d61.
Sep 12 17:38:06.001919 kubelet[2222]: W0912 17:38:06.001761 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Sep 12 17:38:06.001919 kubelet[2222]: E0912 17:38:06.001921 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:38:06.026456 kubelet[2222]: I0912 17:38:06.026049 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:06.029289 kubelet[2222]: E0912 17:38:06.027358 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:06.045358 containerd[1456]: time="2025-09-12T17:38:06.045300775Z" level=info msg="StartContainer for \"6e2de11c2ea7a573ee364a379efdf53ede5a499f184181e29b44999574f6ab25\" returns successfully" Sep 12 17:38:06.067983 containerd[1456]: time="2025-09-12T17:38:06.067896909Z" level=info msg="StartContainer for \"3bead66a32af4e1188d93b05e028ed2f3a4e26d508b3a8539afecb06f7c45e24\" returns successfully" Sep 12 17:38:06.135841 containerd[1456]: time="2025-09-12T17:38:06.135763027Z" level=info msg="StartContainer for \"fd1d96f140657f31f941587d49091e2c9ed0fd14663750759afdcac04f5f8d61\" returns successfully" Sep 12 17:38:06.808595 update_engine[1447]: I20250912 17:38:06.807617 1447 update_attempter.cc:509] Updating boot flags... 
Sep 12 17:38:06.962627 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2506) Sep 12 17:38:07.134547 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2502) Sep 12 17:38:09.235450 kubelet[2222]: I0912 17:38:09.234725 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:09.567715 kubelet[2222]: E0912 17:38:09.566194 2222 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" not found" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:09.599837 kubelet[2222]: I0912 17:38:09.599672 2222 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:09.600277 kubelet[2222]: E0912 17:38:09.599750 2222 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\": node \"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" not found" Sep 12 17:38:09.748992 kubelet[2222]: I0912 17:38:09.748668 2222 apiserver.go:52] "Watching apiserver" Sep 12 17:38:09.777798 kubelet[2222]: I0912 17:38:09.777759 2222 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:38:09.874079 kubelet[2222]: E0912 17:38:09.873584 2222 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:11.592636 systemd[1]: Reloading requested from client PID 2517 ('systemctl') (unit session-9.scope)... 
Sep 12 17:38:11.592670 systemd[1]: Reloading... Sep 12 17:38:11.768541 zram_generator::config[2560]: No configuration found. Sep 12 17:38:11.906990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:12.043359 systemd[1]: Reloading finished in 449 ms. Sep 12 17:38:12.117974 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:38:12.136654 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:38:12.137025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:38:12.137132 systemd[1]: kubelet.service: Consumed 1.694s CPU time, 129.8M memory peak, 0B memory swap peak. Sep 12 17:38:12.142946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:38:12.462186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:38:12.467852 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:38:12.559265 kubelet[2605]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:38:12.561589 kubelet[2605]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:38:12.561589 kubelet[2605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:38:12.561589 kubelet[2605]: I0912 17:38:12.560148 2605 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:38:12.574773 kubelet[2605]: I0912 17:38:12.574216 2605 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:38:12.575150 kubelet[2605]: I0912 17:38:12.575109 2605 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:38:12.575636 kubelet[2605]: I0912 17:38:12.575598 2605 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:38:12.578114 kubelet[2605]: I0912 17:38:12.577900 2605 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:38:12.583046 kubelet[2605]: I0912 17:38:12.582811 2605 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:38:12.591722 kubelet[2605]: E0912 17:38:12.591629 2605 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:38:12.591722 kubelet[2605]: I0912 17:38:12.591717 2605 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:38:12.599534 kubelet[2605]: I0912 17:38:12.597626 2605 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:38:12.599534 kubelet[2605]: I0912 17:38:12.597787 2605 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:38:12.599534 kubelet[2605]: I0912 17:38:12.597972 2605 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:38:12.599902 kubelet[2605]: I0912 17:38:12.598015 2605 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Topo
logyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:38:12.599902 kubelet[2605]: I0912 17:38:12.598301 2605 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:38:12.599902 kubelet[2605]: I0912 17:38:12.598318 2605 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:38:12.599902 kubelet[2605]: I0912 17:38:12.598358 2605 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:38:12.599902 kubelet[2605]: I0912 17:38:12.598499 2605 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:38:12.599902 kubelet[2605]: I0912 17:38:12.598546 2605 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:38:12.599902 kubelet[2605]: I0912 17:38:12.598591 2605 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:38:12.599902 kubelet[2605]: I0912 17:38:12.598604 2605 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:38:12.613536 kubelet[2605]: I0912 17:38:12.613454 2605 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:38:12.614310 kubelet[2605]: I0912 17:38:12.614278 2605 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:38:12.618457 kubelet[2605]: I0912 17:38:12.617017 2605 server.go:1274] "Started kubelet" Sep 12 17:38:12.624392 kubelet[2605]: I0912 17:38:12.624364 2605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:38:12.629935 kubelet[2605]: I0912 17:38:12.629859 2605 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:38:12.631415 kubelet[2605]: I0912 17:38:12.631364 2605 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:38:12.631901 kubelet[2605]: I0912 17:38:12.631851 2605 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:38:12.645447 
kubelet[2605]: I0912 17:38:12.645192 2605 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:38:12.645447 kubelet[2605]: I0912 17:38:12.635745 2605 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:38:12.645447 kubelet[2605]: I0912 17:38:12.633634 2605 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:38:12.646287 kubelet[2605]: I0912 17:38:12.635768 2605 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:38:12.646287 kubelet[2605]: I0912 17:38:12.645789 2605 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:38:12.646287 kubelet[2605]: E0912 17:38:12.635989 2605 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" not found" Sep 12 17:38:12.656148 kubelet[2605]: I0912 17:38:12.656103 2605 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:38:12.656918 kubelet[2605]: I0912 17:38:12.656293 2605 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:38:12.657375 kubelet[2605]: I0912 17:38:12.657322 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:38:12.659452 kubelet[2605]: I0912 17:38:12.659413 2605 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:38:12.660102 kubelet[2605]: I0912 17:38:12.659598 2605 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:38:12.660102 kubelet[2605]: I0912 17:38:12.659637 2605 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:38:12.660102 kubelet[2605]: E0912 17:38:12.659723 2605 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:38:12.673041 kubelet[2605]: I0912 17:38:12.673007 2605 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:38:12.766292 kubelet[2605]: E0912 17:38:12.766158 2605 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:38:12.773245 kubelet[2605]: I0912 17:38:12.773214 2605 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:38:12.773245 kubelet[2605]: I0912 17:38:12.773240 2605 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:38:12.773465 kubelet[2605]: I0912 17:38:12.773288 2605 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:38:12.774303 kubelet[2605]: I0912 17:38:12.773881 2605 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:38:12.774303 kubelet[2605]: I0912 17:38:12.774066 2605 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:38:12.774303 kubelet[2605]: I0912 17:38:12.774120 2605 policy_none.go:49] "None policy: Start" Sep 12 17:38:12.777808 kubelet[2605]: I0912 17:38:12.775236 2605 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:38:12.777808 kubelet[2605]: I0912 17:38:12.775272 2605 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:38:12.777808 kubelet[2605]: I0912 17:38:12.775498 2605 state_mem.go:75] "Updated machine memory state" Sep 12 17:38:12.784386 kubelet[2605]: I0912 17:38:12.784356 2605 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:38:12.785766 kubelet[2605]: I0912 17:38:12.785124 2605 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:38:12.785766 kubelet[2605]: I0912 17:38:12.785146 2605 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:38:12.785766 kubelet[2605]: I0912 17:38:12.785612 2605 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:38:12.902248 kubelet[2605]: I0912 17:38:12.902208 2605 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:12.913301 kubelet[2605]: I0912 17:38:12.913253 2605 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:12.913566 kubelet[2605]: I0912 17:38:12.913549 2605 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:12.982189 kubelet[2605]: W0912 17:38:12.982130 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Sep 12 17:38:12.984064 kubelet[2605]: W0912 17:38:12.983720 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Sep 12 17:38:12.984064 kubelet[2605]: W0912 17:38:12.983806 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Sep 12 17:38:13.048336 kubelet[2605]: I0912 17:38:13.048198 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b312c41e3294e6d7eae972350709b229-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"b312c41e3294e6d7eae972350709b229\") " pod="kube-system/kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:13.048336 kubelet[2605]: I0912 17:38:13.048258 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b312c41e3294e6d7eae972350709b229-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"b312c41e3294e6d7eae972350709b229\") " pod="kube-system/kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:13.048336 kubelet[2605]: I0912 17:38:13.048288 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b312c41e3294e6d7eae972350709b229-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"b312c41e3294e6d7eae972350709b229\") " pod="kube-system/kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:13.048336 kubelet[2605]: I0912 17:38:13.048318 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:13.048709 kubelet[2605]: I0912 17:38:13.048347 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:13.048709 kubelet[2605]: I0912 17:38:13.048377 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:13.049844 kubelet[2605]: I0912 17:38:13.049591 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0f625f93a8a746f1119115babc90803-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"f0f625f93a8a746f1119115babc90803\") " pod="kube-system/kube-scheduler-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:13.049844 kubelet[2605]: I0912 17:38:13.049725 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:13.049844 kubelet[2605]: I0912 17:38:13.049783 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9205a42ad96fd6174f0b5dfcde644508-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" (UID: \"9205a42ad96fd6174f0b5dfcde644508\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:13.609810 kubelet[2605]: I0912 17:38:13.608995 2605 apiserver.go:52] "Watching apiserver" Sep 12 17:38:13.646228 kubelet[2605]: I0912 17:38:13.646122 2605 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:38:13.738882 kubelet[2605]: I0912 17:38:13.738806 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" podStartSLOduration=1.738780341 podStartE2EDuration="1.738780341s" podCreationTimestamp="2025-09-12 17:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:38:13.737605721 +0000 UTC m=+1.261405483" watchObservedRunningTime="2025-09-12 17:38:13.738780341 +0000 UTC m=+1.262580114" Sep 12 17:38:13.775010 kubelet[2605]: I0912 17:38:13.774583 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" podStartSLOduration=1.7745548 podStartE2EDuration="1.7745548s" podCreationTimestamp="2025-09-12 17:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:38:13.77070477 +0000 UTC m=+1.294504529" watchObservedRunningTime="2025-09-12 17:38:13.7745548 +0000 UTC m=+1.298354564" Sep 12 17:38:13.775010 kubelet[2605]: I0912 17:38:13.774739 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" podStartSLOduration=1.7747317059999999 podStartE2EDuration="1.774731706s" podCreationTimestamp="2025-09-12 17:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:38:13.758705603 +0000 UTC m=+1.282505376" watchObservedRunningTime="2025-09-12 17:38:13.774731706 +0000 UTC m=+1.298531454" Sep 12 17:38:16.935973 kubelet[2605]: I0912 17:38:16.935772 2605 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:38:16.936810 containerd[1456]: time="2025-09-12T17:38:16.936756232Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:38:16.937332 kubelet[2605]: I0912 17:38:16.937064 2605 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:38:17.422117 systemd[1]: Created slice kubepods-besteffort-pod7dd5c0c2_6117_4754_9e6f_46da031dfd8f.slice - libcontainer container kubepods-besteffort-pod7dd5c0c2_6117_4754_9e6f_46da031dfd8f.slice. 
Sep 12 17:38:17.482310 kubelet[2605]: I0912 17:38:17.481680 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dd5c0c2-6117-4754-9e6f-46da031dfd8f-xtables-lock\") pod \"kube-proxy-6zwfh\" (UID: \"7dd5c0c2-6117-4754-9e6f-46da031dfd8f\") " pod="kube-system/kube-proxy-6zwfh" Sep 12 17:38:17.482310 kubelet[2605]: I0912 17:38:17.481736 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dd5c0c2-6117-4754-9e6f-46da031dfd8f-lib-modules\") pod \"kube-proxy-6zwfh\" (UID: \"7dd5c0c2-6117-4754-9e6f-46da031dfd8f\") " pod="kube-system/kube-proxy-6zwfh" Sep 12 17:38:17.482310 kubelet[2605]: I0912 17:38:17.481772 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cssfr\" (UniqueName: \"kubernetes.io/projected/7dd5c0c2-6117-4754-9e6f-46da031dfd8f-kube-api-access-cssfr\") pod \"kube-proxy-6zwfh\" (UID: \"7dd5c0c2-6117-4754-9e6f-46da031dfd8f\") " pod="kube-system/kube-proxy-6zwfh" Sep 12 17:38:17.482310 kubelet[2605]: I0912 17:38:17.481801 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7dd5c0c2-6117-4754-9e6f-46da031dfd8f-kube-proxy\") pod \"kube-proxy-6zwfh\" (UID: \"7dd5c0c2-6117-4754-9e6f-46da031dfd8f\") " pod="kube-system/kube-proxy-6zwfh" Sep 12 17:38:17.732710 containerd[1456]: time="2025-09-12T17:38:17.732123708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6zwfh,Uid:7dd5c0c2-6117-4754-9e6f-46da031dfd8f,Namespace:kube-system,Attempt:0,}" Sep 12 17:38:17.784644 containerd[1456]: time="2025-09-12T17:38:17.783941784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:38:17.784644 containerd[1456]: time="2025-09-12T17:38:17.784203140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:38:17.784644 containerd[1456]: time="2025-09-12T17:38:17.784256344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:17.784644 containerd[1456]: time="2025-09-12T17:38:17.784428287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:17.832818 systemd[1]: Started cri-containerd-df724ccde893a59e883d6e57666093a5e626d4276b76c9c83e665e8c9394360f.scope - libcontainer container df724ccde893a59e883d6e57666093a5e626d4276b76c9c83e665e8c9394360f.
Sep 12 17:38:17.877272 containerd[1456]: time="2025-09-12T17:38:17.877214931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6zwfh,Uid:7dd5c0c2-6117-4754-9e6f-46da031dfd8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"df724ccde893a59e883d6e57666093a5e626d4276b76c9c83e665e8c9394360f\""
Sep 12 17:38:17.883535 containerd[1456]: time="2025-09-12T17:38:17.883453225Z" level=info msg="CreateContainer within sandbox \"df724ccde893a59e883d6e57666093a5e626d4276b76c9c83e665e8c9394360f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:38:17.924110 containerd[1456]: time="2025-09-12T17:38:17.922367588Z" level=info msg="CreateContainer within sandbox \"df724ccde893a59e883d6e57666093a5e626d4276b76c9c83e665e8c9394360f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f83b4de1a63d965c5666f0a77265a60b20d980e8319aef7d3a29f384bc6c8f4d\""
Sep 12 17:38:17.930471 containerd[1456]: time="2025-09-12T17:38:17.930391337Z" level=info msg="StartContainer for \"f83b4de1a63d965c5666f0a77265a60b20d980e8319aef7d3a29f384bc6c8f4d\""
Sep 12 17:38:17.986802 systemd[1]: Started cri-containerd-f83b4de1a63d965c5666f0a77265a60b20d980e8319aef7d3a29f384bc6c8f4d.scope - libcontainer container f83b4de1a63d965c5666f0a77265a60b20d980e8319aef7d3a29f384bc6c8f4d.
Sep 12 17:38:18.067577 systemd[1]: Created slice kubepods-besteffort-pod00653d4c_545d_45cc_a845_09b560a2ce2b.slice - libcontainer container kubepods-besteffort-pod00653d4c_545d_45cc_a845_09b560a2ce2b.slice.
Sep 12 17:38:18.083627 kubelet[2605]: I0912 17:38:18.083459 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00653d4c-545d-45cc-a845-09b560a2ce2b-var-lib-calico\") pod \"tigera-operator-58fc44c59b-ltn44\" (UID: \"00653d4c-545d-45cc-a845-09b560a2ce2b\") " pod="tigera-operator/tigera-operator-58fc44c59b-ltn44"
Sep 12 17:38:18.083627 kubelet[2605]: I0912 17:38:18.083551 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btx7f\" (UniqueName: \"kubernetes.io/projected/00653d4c-545d-45cc-a845-09b560a2ce2b-kube-api-access-btx7f\") pod \"tigera-operator-58fc44c59b-ltn44\" (UID: \"00653d4c-545d-45cc-a845-09b560a2ce2b\") " pod="tigera-operator/tigera-operator-58fc44c59b-ltn44"
Sep 12 17:38:18.116441 containerd[1456]: time="2025-09-12T17:38:18.116375603Z" level=info msg="StartContainer for \"f83b4de1a63d965c5666f0a77265a60b20d980e8319aef7d3a29f384bc6c8f4d\" returns successfully"
Sep 12 17:38:18.381901 containerd[1456]: time="2025-09-12T17:38:18.380922267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ltn44,Uid:00653d4c-545d-45cc-a845-09b560a2ce2b,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:38:18.431147 containerd[1456]: time="2025-09-12T17:38:18.430454661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:38:18.431147 containerd[1456]: time="2025-09-12T17:38:18.430557988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:38:18.431147 containerd[1456]: time="2025-09-12T17:38:18.430587597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:18.432566 containerd[1456]: time="2025-09-12T17:38:18.432374475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:18.463796 systemd[1]: Started cri-containerd-cad9c212476e7b88524a0ea5989e9999929b8b4da7226142aa88e0c4c6668c93.scope - libcontainer container cad9c212476e7b88524a0ea5989e9999929b8b4da7226142aa88e0c4c6668c93.
Sep 12 17:38:18.543770 containerd[1456]: time="2025-09-12T17:38:18.543568264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ltn44,Uid:00653d4c-545d-45cc-a845-09b560a2ce2b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cad9c212476e7b88524a0ea5989e9999929b8b4da7226142aa88e0c4c6668c93\""
Sep 12 17:38:18.547965 containerd[1456]: time="2025-09-12T17:38:18.547896879Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:38:18.607063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138955048.mount: Deactivated successfully.
Sep 12 17:38:19.691251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453873482.mount: Deactivated successfully.
Sep 12 17:38:20.698668 containerd[1456]: time="2025-09-12T17:38:20.698607005Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:38:20.701286 containerd[1456]: time="2025-09-12T17:38:20.700972698Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 17:38:20.704553 containerd[1456]: time="2025-09-12T17:38:20.702752012Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:38:20.707539 containerd[1456]: time="2025-09-12T17:38:20.707461373Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:38:20.709238 containerd[1456]: time="2025-09-12T17:38:20.709176332Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.161087834s"
Sep 12 17:38:20.709238 containerd[1456]: time="2025-09-12T17:38:20.709241953Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 17:38:20.712848 containerd[1456]: time="2025-09-12T17:38:20.712805010Z" level=info msg="CreateContainer within sandbox \"cad9c212476e7b88524a0ea5989e9999929b8b4da7226142aa88e0c4c6668c93\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:38:20.733677 containerd[1456]: time="2025-09-12T17:38:20.733617069Z" level=info msg="CreateContainer within sandbox \"cad9c212476e7b88524a0ea5989e9999929b8b4da7226142aa88e0c4c6668c93\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8a6b7dc9b751bcdf6a0ed6dd3bb15a40314756b0d50660fc7bdd989a9fc4ed0e\""
Sep 12 17:38:20.734921 containerd[1456]: time="2025-09-12T17:38:20.734855998Z" level=info msg="StartContainer for \"8a6b7dc9b751bcdf6a0ed6dd3bb15a40314756b0d50660fc7bdd989a9fc4ed0e\""
Sep 12 17:38:20.785011 systemd[1]: run-containerd-runc-k8s.io-8a6b7dc9b751bcdf6a0ed6dd3bb15a40314756b0d50660fc7bdd989a9fc4ed0e-runc.P6TaoA.mount: Deactivated successfully.
Sep 12 17:38:20.800930 systemd[1]: Started cri-containerd-8a6b7dc9b751bcdf6a0ed6dd3bb15a40314756b0d50660fc7bdd989a9fc4ed0e.scope - libcontainer container 8a6b7dc9b751bcdf6a0ed6dd3bb15a40314756b0d50660fc7bdd989a9fc4ed0e.
Sep 12 17:38:20.840570 containerd[1456]: time="2025-09-12T17:38:20.840489253Z" level=info msg="StartContainer for \"8a6b7dc9b751bcdf6a0ed6dd3bb15a40314756b0d50660fc7bdd989a9fc4ed0e\" returns successfully"
Sep 12 17:38:21.764996 kubelet[2605]: I0912 17:38:21.764857 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6zwfh" podStartSLOduration=4.764830166 podStartE2EDuration="4.764830166s" podCreationTimestamp="2025-09-12 17:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:38:18.757860735 +0000 UTC m=+6.281660507" watchObservedRunningTime="2025-09-12 17:38:21.764830166 +0000 UTC m=+9.288629936"
Sep 12 17:38:22.102652 kubelet[2605]: I0912 17:38:22.102432 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-ltn44" podStartSLOduration=2.937633096 podStartE2EDuration="5.10240123s" podCreationTimestamp="2025-09-12 17:38:17 +0000 UTC" firstStartedPulling="2025-09-12 17:38:18.546199278 +0000 UTC m=+6.069999039" lastFinishedPulling="2025-09-12 17:38:20.710967424 +0000 UTC m=+8.234767173" observedRunningTime="2025-09-12 17:38:21.765389896 +0000 UTC m=+9.289189669" watchObservedRunningTime="2025-09-12 17:38:22.10240123 +0000 UTC m=+9.626201000"
Sep 12 17:38:28.298891 sudo[1732]: pam_unix(sudo:session): session closed for user root
Sep 12 17:38:28.359851 sshd[1729]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:28.370147 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:38:28.371774 systemd[1]: sshd@8-10.128.0.49:22-139.178.89.65:48340.service: Deactivated successfully.
Sep 12 17:38:28.379559 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:38:28.380345 systemd[1]: session-9.scope: Consumed 6.285s CPU time, 158.7M memory peak, 0B memory swap peak.
Sep 12 17:38:28.382835 systemd-logind[1436]: Removed session 9.
Sep 12 17:38:33.585036 systemd[1]: Created slice kubepods-besteffort-pod5b6bb65b_de04_4869_a922_c1be24746d6e.slice - libcontainer container kubepods-besteffort-pod5b6bb65b_de04_4869_a922_c1be24746d6e.slice.
Sep 12 17:38:33.684983 kubelet[2605]: I0912 17:38:33.684923 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmwv\" (UniqueName: \"kubernetes.io/projected/5b6bb65b-de04-4869-a922-c1be24746d6e-kube-api-access-ksmwv\") pod \"calico-typha-58b5dbb77b-rwfrw\" (UID: \"5b6bb65b-de04-4869-a922-c1be24746d6e\") " pod="calico-system/calico-typha-58b5dbb77b-rwfrw"
Sep 12 17:38:33.686304 kubelet[2605]: I0912 17:38:33.685355 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b6bb65b-de04-4869-a922-c1be24746d6e-tigera-ca-bundle\") pod \"calico-typha-58b5dbb77b-rwfrw\" (UID: \"5b6bb65b-de04-4869-a922-c1be24746d6e\") " pod="calico-system/calico-typha-58b5dbb77b-rwfrw"
Sep 12 17:38:33.686304 kubelet[2605]: I0912 17:38:33.685582 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b6bb65b-de04-4869-a922-c1be24746d6e-typha-certs\") pod \"calico-typha-58b5dbb77b-rwfrw\" (UID: \"5b6bb65b-de04-4869-a922-c1be24746d6e\") " pod="calico-system/calico-typha-58b5dbb77b-rwfrw"
Sep 12 17:38:33.779823 systemd[1]: Created slice kubepods-besteffort-pod1e69b657_e032_468a_bbe2_3ce4180c52d2.slice - libcontainer container kubepods-besteffort-pod1e69b657_e032_468a_bbe2_3ce4180c52d2.slice.
Sep 12 17:38:33.787099 kubelet[2605]: W0912 17:38:33.787056 2605 reflector.go:561] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' and this object
Sep 12 17:38:33.787311 kubelet[2605]: E0912 17:38:33.787115 2605 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"cni-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Sep 12 17:38:33.790714 kubelet[2605]: W0912 17:38:33.790672 2605 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' and this object
Sep 12 17:38:33.790842 kubelet[2605]: E0912 17:38:33.790736 2605 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Sep 12 17:38:33.887848 kubelet[2605]: I0912 17:38:33.887699 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e69b657-e032-468a-bbe2-3ce4180c52d2-var-lib-calico\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.887848 kubelet[2605]: I0912 17:38:33.887759 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e69b657-e032-468a-bbe2-3ce4180c52d2-var-run-calico\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.887848 kubelet[2605]: I0912 17:38:33.887785 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e69b657-e032-468a-bbe2-3ce4180c52d2-xtables-lock\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.887848 kubelet[2605]: I0912 17:38:33.887815 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e69b657-e032-468a-bbe2-3ce4180c52d2-flexvol-driver-host\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.888173 kubelet[2605]: I0912 17:38:33.887872 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e69b657-e032-468a-bbe2-3ce4180c52d2-policysync\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.888173 kubelet[2605]: I0912 17:38:33.887916 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn7c5\" (UniqueName: \"kubernetes.io/projected/1e69b657-e032-468a-bbe2-3ce4180c52d2-kube-api-access-zn7c5\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.888173 kubelet[2605]: I0912 17:38:33.887964 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e69b657-e032-468a-bbe2-3ce4180c52d2-lib-modules\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.888173 kubelet[2605]: I0912 17:38:33.887990 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e69b657-e032-468a-bbe2-3ce4180c52d2-tigera-ca-bundle\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.888173 kubelet[2605]: I0912 17:38:33.888015 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e69b657-e032-468a-bbe2-3ce4180c52d2-cni-net-dir\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.888173 kubelet[2605]: I0912 17:38:33.888048 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e69b657-e032-468a-bbe2-3ce4180c52d2-cni-bin-dir\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.888173 kubelet[2605]: I0912 17:38:33.888077 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e69b657-e032-468a-bbe2-3ce4180c52d2-cni-log-dir\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.888173 kubelet[2605]: I0912 17:38:33.888113 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e69b657-e032-468a-bbe2-3ce4180c52d2-node-certs\") pod \"calico-node-4zs64\" (UID: \"1e69b657-e032-468a-bbe2-3ce4180c52d2\") " pod="calico-system/calico-node-4zs64"
Sep 12 17:38:33.892896 containerd[1456]: time="2025-09-12T17:38:33.892848677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58b5dbb77b-rwfrw,Uid:5b6bb65b-de04-4869-a922-c1be24746d6e,Namespace:calico-system,Attempt:0,}"
Sep 12 17:38:33.937351 containerd[1456]: time="2025-09-12T17:38:33.936947414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:38:33.937351 containerd[1456]: time="2025-09-12T17:38:33.937161727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:38:33.937351 containerd[1456]: time="2025-09-12T17:38:33.937284434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:33.939466 containerd[1456]: time="2025-09-12T17:38:33.938439270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:33.992092 systemd[1]: Started cri-containerd-1278cc796d6807d0e2c0106563e67e5929530ce1e76e53d0cfa8df843001793f.scope - libcontainer container 1278cc796d6807d0e2c0106563e67e5929530ce1e76e53d0cfa8df843001793f.
Sep 12 17:38:33.997323 kubelet[2605]: E0912 17:38:33.997276 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:33.997323 kubelet[2605]: W0912 17:38:33.997307 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:33.997659 kubelet[2605]: E0912 17:38:33.997341 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.041413 kubelet[2605]: E0912 17:38:34.041367 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.041413 kubelet[2605]: W0912 17:38:34.041401 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.041658 kubelet[2605]: E0912 17:38:34.041432 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.091680 kubelet[2605]: E0912 17:38:34.091633 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.091680 kubelet[2605]: W0912 17:38:34.091674 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.091929 kubelet[2605]: E0912 17:38:34.091715 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.093920 containerd[1456]: time="2025-09-12T17:38:34.093293316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58b5dbb77b-rwfrw,Uid:5b6bb65b-de04-4869-a922-c1be24746d6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1278cc796d6807d0e2c0106563e67e5929530ce1e76e53d0cfa8df843001793f\""
Sep 12 17:38:34.096463 containerd[1456]: time="2025-09-12T17:38:34.096403221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:38:34.123097 kubelet[2605]: E0912 17:38:34.123035 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whntj" podUID="3a3ff327-190a-4ebe-85de-f1209e1870ef"
Sep 12 17:38:34.193662 kubelet[2605]: E0912 17:38:34.193606 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.193662 kubelet[2605]: W0912 17:38:34.193649 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.193952 kubelet[2605]: E0912 17:38:34.193684 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.201720 kubelet[2605]: E0912 17:38:34.201668 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.201720 kubelet[2605]: W0912 17:38:34.201713 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.202129 kubelet[2605]: E0912 17:38:34.201746 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.203878 kubelet[2605]: E0912 17:38:34.203824 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.203878 kubelet[2605]: W0912 17:38:34.203852 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.204097 kubelet[2605]: E0912 17:38:34.203882 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.204444 kubelet[2605]: E0912 17:38:34.204236 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.204444 kubelet[2605]: W0912 17:38:34.204254 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.204444 kubelet[2605]: E0912 17:38:34.204275 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.204891 kubelet[2605]: E0912 17:38:34.204639 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.204891 kubelet[2605]: W0912 17:38:34.204657 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.204891 kubelet[2605]: E0912 17:38:34.204675 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.205061 kubelet[2605]: E0912 17:38:34.205025 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.205061 kubelet[2605]: W0912 17:38:34.205040 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.205061 kubelet[2605]: E0912 17:38:34.205057 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.205872 kubelet[2605]: E0912 17:38:34.205361 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.205872 kubelet[2605]: W0912 17:38:34.205379 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.205872 kubelet[2605]: E0912 17:38:34.205396 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.205872 kubelet[2605]: E0912 17:38:34.205749 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.205872 kubelet[2605]: W0912 17:38:34.205763 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.205872 kubelet[2605]: E0912 17:38:34.205781 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.206245 kubelet[2605]: E0912 17:38:34.206084 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.206245 kubelet[2605]: W0912 17:38:34.206098 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.206245 kubelet[2605]: E0912 17:38:34.206115 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.207545 kubelet[2605]: E0912 17:38:34.207239 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.207545 kubelet[2605]: W0912 17:38:34.207256 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.207545 kubelet[2605]: E0912 17:38:34.207276 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.207767 kubelet[2605]: E0912 17:38:34.207636 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.207767 kubelet[2605]: W0912 17:38:34.207651 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.207767 kubelet[2605]: E0912 17:38:34.207669 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.208692 kubelet[2605]: E0912 17:38:34.208637 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.208790 kubelet[2605]: W0912 17:38:34.208745 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.208790 kubelet[2605]: E0912 17:38:34.208768 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.209117 kubelet[2605]: E0912 17:38:34.209091 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.209117 kubelet[2605]: W0912 17:38:34.209117 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.209260 kubelet[2605]: E0912 17:38:34.209135 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.209711 kubelet[2605]: E0912 17:38:34.209660 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.209711 kubelet[2605]: W0912 17:38:34.209709 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.209889 kubelet[2605]: E0912 17:38:34.209728 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.210917 kubelet[2605]: E0912 17:38:34.210892 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.210917 kubelet[2605]: W0912 17:38:34.210916 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.210917 kubelet[2605]: E0912 17:38:34.210935 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.211281 kubelet[2605]: E0912 17:38:34.211259 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.211281 kubelet[2605]: W0912 17:38:34.211281 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.211446 kubelet[2605]: E0912 17:38:34.211299 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.211807 kubelet[2605]: E0912 17:38:34.211649 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.211807 kubelet[2605]: W0912 17:38:34.211671 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.211807 kubelet[2605]: E0912 17:38:34.211698 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.213538 kubelet[2605]: E0912 17:38:34.212739 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.213538 kubelet[2605]: W0912 17:38:34.212759 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.213538 kubelet[2605]: E0912 17:38:34.212776 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.213538 kubelet[2605]: E0912 17:38:34.213107 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.213538 kubelet[2605]: W0912 17:38:34.213121 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.213538 kubelet[2605]: E0912 17:38:34.213138 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.213538 kubelet[2605]: E0912 17:38:34.213435 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.213538 kubelet[2605]: W0912 17:38:34.213448 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.213538 kubelet[2605]: E0912 17:38:34.213464 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.214187 kubelet[2605]: E0912 17:38:34.214165 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.214187 kubelet[2605]: W0912 17:38:34.214185 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.214312 kubelet[2605]: E0912 17:38:34.214203 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.295261 kubelet[2605]: E0912 17:38:34.295164 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.295261 kubelet[2605]: W0912 17:38:34.295199 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.295261 kubelet[2605]: E0912 17:38:34.295230 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.296950 kubelet[2605]: I0912 17:38:34.295275 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3a3ff327-190a-4ebe-85de-f1209e1870ef-registration-dir\") pod \"csi-node-driver-whntj\" (UID: \"3a3ff327-190a-4ebe-85de-f1209e1870ef\") " pod="calico-system/csi-node-driver-whntj"
Sep 12 17:38:34.296950 kubelet[2605]: E0912 17:38:34.295816 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.296950 kubelet[2605]: W0912 17:38:34.295835 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.297284 kubelet[2605]: E0912 17:38:34.297253 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.297689 kubelet[2605]: I0912 17:38:34.297635 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3a3ff327-190a-4ebe-85de-f1209e1870ef-socket-dir\") pod \"csi-node-driver-whntj\" (UID: \"3a3ff327-190a-4ebe-85de-f1209e1870ef\") " pod="calico-system/csi-node-driver-whntj"
Sep 12 17:38:34.297841 kubelet[2605]: E0912 17:38:34.297817 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.297910 kubelet[2605]: W0912 17:38:34.297844 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.299621 kubelet[2605]: E0912 17:38:34.298496 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:38:34.300270 kubelet[2605]: E0912 17:38:34.300238 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:38:34.300270 kubelet[2605]: W0912 17:38:34.300267 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:38:34.300419 kubelet[2605]: E0912 17:38:34.300298 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 17:38:34.300419 kubelet[2605]: I0912 17:38:34.300337 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a3ff327-190a-4ebe-85de-f1209e1870ef-kubelet-dir\") pod \"csi-node-driver-whntj\" (UID: \"3a3ff327-190a-4ebe-85de-f1209e1870ef\") " pod="calico-system/csi-node-driver-whntj" Sep 12 17:38:34.301746 kubelet[2605]: E0912 17:38:34.301716 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.301746 kubelet[2605]: W0912 17:38:34.301745 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.302119 kubelet[2605]: E0912 17:38:34.302065 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.302213 kubelet[2605]: E0912 17:38:34.302191 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.302278 kubelet[2605]: W0912 17:38:34.302237 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.302278 kubelet[2605]: E0912 17:38:34.302260 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.304112 kubelet[2605]: E0912 17:38:34.304081 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.304112 kubelet[2605]: W0912 17:38:34.304108 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.304266 kubelet[2605]: E0912 17:38:34.304224 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.307529 kubelet[2605]: E0912 17:38:34.306883 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.307529 kubelet[2605]: W0912 17:38:34.306906 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.307529 kubelet[2605]: E0912 17:38:34.306928 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.307529 kubelet[2605]: E0912 17:38:34.307335 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.307529 kubelet[2605]: W0912 17:38:34.307349 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.307529 kubelet[2605]: E0912 17:38:34.307400 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.307916 kubelet[2605]: E0912 17:38:34.307804 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.307916 kubelet[2605]: W0912 17:38:34.307819 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.307916 kubelet[2605]: E0912 17:38:34.307837 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.308086 kubelet[2605]: I0912 17:38:34.307908 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3a3ff327-190a-4ebe-85de-f1209e1870ef-varrun\") pod \"csi-node-driver-whntj\" (UID: \"3a3ff327-190a-4ebe-85de-f1209e1870ef\") " pod="calico-system/csi-node-driver-whntj" Sep 12 17:38:34.309710 kubelet[2605]: E0912 17:38:34.309678 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.309710 kubelet[2605]: W0912 17:38:34.309712 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.309885 kubelet[2605]: E0912 17:38:34.309738 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.310169 kubelet[2605]: E0912 17:38:34.310144 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.310169 kubelet[2605]: W0912 17:38:34.310167 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.310309 kubelet[2605]: E0912 17:38:34.310262 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.310598 kubelet[2605]: E0912 17:38:34.310575 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.310598 kubelet[2605]: W0912 17:38:34.310598 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.310766 kubelet[2605]: E0912 17:38:34.310616 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.311010 kubelet[2605]: E0912 17:38:34.310986 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.311010 kubelet[2605]: W0912 17:38:34.311009 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.311133 kubelet[2605]: E0912 17:38:34.311026 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.311133 kubelet[2605]: I0912 17:38:34.311065 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6g9k\" (UniqueName: \"kubernetes.io/projected/3a3ff327-190a-4ebe-85de-f1209e1870ef-kube-api-access-q6g9k\") pod \"csi-node-driver-whntj\" (UID: \"3a3ff327-190a-4ebe-85de-f1209e1870ef\") " pod="calico-system/csi-node-driver-whntj" Sep 12 17:38:34.312533 kubelet[2605]: E0912 17:38:34.311430 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.312533 kubelet[2605]: W0912 17:38:34.311449 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.312533 kubelet[2605]: E0912 17:38:34.311468 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.312533 kubelet[2605]: E0912 17:38:34.311820 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.312533 kubelet[2605]: W0912 17:38:34.311833 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.312533 kubelet[2605]: E0912 17:38:34.311850 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.412453 kubelet[2605]: E0912 17:38:34.412386 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.412453 kubelet[2605]: W0912 17:38:34.412417 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.412453 kubelet[2605]: E0912 17:38:34.412447 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.412908 kubelet[2605]: E0912 17:38:34.412879 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.412908 kubelet[2605]: W0912 17:38:34.412904 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.413076 kubelet[2605]: E0912 17:38:34.412927 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.414546 kubelet[2605]: E0912 17:38:34.413809 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.414546 kubelet[2605]: W0912 17:38:34.413839 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.414845 kubelet[2605]: E0912 17:38:34.414806 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.415218 kubelet[2605]: E0912 17:38:34.415198 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.415218 kubelet[2605]: W0912 17:38:34.415218 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.415364 kubelet[2605]: E0912 17:38:34.415260 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.416003 kubelet[2605]: E0912 17:38:34.415744 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.416003 kubelet[2605]: W0912 17:38:34.415761 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.416003 kubelet[2605]: E0912 17:38:34.415787 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.416499 kubelet[2605]: E0912 17:38:34.416474 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.416655 kubelet[2605]: W0912 17:38:34.416547 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.417001 kubelet[2605]: E0912 17:38:34.416835 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.417451 kubelet[2605]: E0912 17:38:34.417345 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.417451 kubelet[2605]: W0912 17:38:34.417361 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.417451 kubelet[2605]: E0912 17:38:34.417396 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.418152 kubelet[2605]: E0912 17:38:34.418018 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.418152 kubelet[2605]: W0912 17:38:34.418035 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.418567 kubelet[2605]: E0912 17:38:34.418302 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.418754 kubelet[2605]: E0912 17:38:34.418739 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.418996 kubelet[2605]: W0912 17:38:34.418860 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.418996 kubelet[2605]: E0912 17:38:34.418906 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.419604 kubelet[2605]: E0912 17:38:34.419364 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.419604 kubelet[2605]: W0912 17:38:34.419384 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.419604 kubelet[2605]: E0912 17:38:34.419405 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.420196 kubelet[2605]: E0912 17:38:34.420047 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.420196 kubelet[2605]: W0912 17:38:34.420064 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.420196 kubelet[2605]: E0912 17:38:34.420159 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.420903 kubelet[2605]: E0912 17:38:34.420677 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.420903 kubelet[2605]: W0912 17:38:34.420694 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.420903 kubelet[2605]: E0912 17:38:34.420795 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.421486 kubelet[2605]: E0912 17:38:34.421303 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.421486 kubelet[2605]: W0912 17:38:34.421318 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.421910 kubelet[2605]: E0912 17:38:34.421697 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.422108 kubelet[2605]: E0912 17:38:34.422003 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.422108 kubelet[2605]: W0912 17:38:34.422018 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.422474 kubelet[2605]: E0912 17:38:34.422249 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.422692 kubelet[2605]: E0912 17:38:34.422660 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.422904 kubelet[2605]: W0912 17:38:34.422796 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.423283 kubelet[2605]: E0912 17:38:34.423075 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.423441 kubelet[2605]: E0912 17:38:34.423428 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.423693 kubelet[2605]: W0912 17:38:34.423556 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.423834 kubelet[2605]: E0912 17:38:34.423801 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.424231 kubelet[2605]: E0912 17:38:34.424098 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.424231 kubelet[2605]: W0912 17:38:34.424113 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.424595 kubelet[2605]: E0912 17:38:34.424400 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.424776 kubelet[2605]: E0912 17:38:34.424741 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.424776 kubelet[2605]: W0912 17:38:34.424757 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.425090 kubelet[2605]: E0912 17:38:34.424920 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.425499 kubelet[2605]: E0912 17:38:34.425392 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.425499 kubelet[2605]: W0912 17:38:34.425409 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.425499 kubelet[2605]: E0912 17:38:34.425568 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.426391 kubelet[2605]: E0912 17:38:34.426084 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.426391 kubelet[2605]: W0912 17:38:34.426103 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.426391 kubelet[2605]: E0912 17:38:34.426198 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.426888 kubelet[2605]: E0912 17:38:34.426756 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.426888 kubelet[2605]: W0912 17:38:34.426774 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.427296 kubelet[2605]: E0912 17:38:34.427036 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.427618 kubelet[2605]: E0912 17:38:34.427451 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.427618 kubelet[2605]: W0912 17:38:34.427467 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.427938 kubelet[2605]: E0912 17:38:34.427751 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.428427 kubelet[2605]: E0912 17:38:34.428263 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.428427 kubelet[2605]: W0912 17:38:34.428279 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.428827 kubelet[2605]: E0912 17:38:34.428770 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.429088 kubelet[2605]: E0912 17:38:34.428961 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.429088 kubelet[2605]: W0912 17:38:34.428977 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.429563 kubelet[2605]: E0912 17:38:34.429237 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.430017 kubelet[2605]: E0912 17:38:34.429990 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.430317 kubelet[2605]: W0912 17:38:34.430151 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.431092 kubelet[2605]: E0912 17:38:34.430617 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.431339 kubelet[2605]: E0912 17:38:34.431240 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.431339 kubelet[2605]: W0912 17:38:34.431258 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.431339 kubelet[2605]: E0912 17:38:34.431277 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.450040 kubelet[2605]: E0912 17:38:34.449544 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.450040 kubelet[2605]: W0912 17:38:34.449602 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.450040 kubelet[2605]: E0912 17:38:34.449634 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.525029 kubelet[2605]: E0912 17:38:34.524821 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.525029 kubelet[2605]: W0912 17:38:34.524851 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.525029 kubelet[2605]: E0912 17:38:34.524877 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:38:34.626826 kubelet[2605]: E0912 17:38:34.626748 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.626826 kubelet[2605]: W0912 17:38:34.626817 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.627090 kubelet[2605]: E0912 17:38:34.626848 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.697481 kubelet[2605]: E0912 17:38:34.697404 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:38:34.697481 kubelet[2605]: W0912 17:38:34.697435 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:38:34.698344 kubelet[2605]: E0912 17:38:34.697576 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:38:34.986003 containerd[1456]: time="2025-09-12T17:38:34.985270022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4zs64,Uid:1e69b657-e032-468a-bbe2-3ce4180c52d2,Namespace:calico-system,Attempt:0,}" Sep 12 17:38:35.094773 containerd[1456]: time="2025-09-12T17:38:35.094579093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:38:35.095625 containerd[1456]: time="2025-09-12T17:38:35.094692778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:38:35.095625 containerd[1456]: time="2025-09-12T17:38:35.094720546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:35.095625 containerd[1456]: time="2025-09-12T17:38:35.094854038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:35.168292 systemd[1]: Started cri-containerd-9906a9af8ac08b61c1800bda34e4fbe22d5974be75fd3eaf71bac5b3c866483f.scope - libcontainer container 9906a9af8ac08b61c1800bda34e4fbe22d5974be75fd3eaf71bac5b3c866483f. Sep 12 17:38:35.270948 containerd[1456]: time="2025-09-12T17:38:35.270714145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4zs64,Uid:1e69b657-e032-468a-bbe2-3ce4180c52d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"9906a9af8ac08b61c1800bda34e4fbe22d5974be75fd3eaf71bac5b3c866483f\"" Sep 12 17:38:35.663432 kubelet[2605]: E0912 17:38:35.660872 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whntj" podUID="3a3ff327-190a-4ebe-85de-f1209e1870ef" Sep 12 17:38:35.804358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186665251.mount: Deactivated successfully. 
Sep 12 17:38:36.722036 containerd[1456]: time="2025-09-12T17:38:36.721955155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:36.724192 containerd[1456]: time="2025-09-12T17:38:36.723776602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:38:36.727545 containerd[1456]: time="2025-09-12T17:38:36.725745076Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:36.729498 containerd[1456]: time="2025-09-12T17:38:36.729412203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:36.730494 containerd[1456]: time="2025-09-12T17:38:36.730451784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.633979021s" Sep 12 17:38:36.730664 containerd[1456]: time="2025-09-12T17:38:36.730637983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:38:36.732040 containerd[1456]: time="2025-09-12T17:38:36.731967121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:38:36.759813 containerd[1456]: time="2025-09-12T17:38:36.759759944Z" level=info msg="CreateContainer within sandbox \"1278cc796d6807d0e2c0106563e67e5929530ce1e76e53d0cfa8df843001793f\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:38:36.784014 containerd[1456]: time="2025-09-12T17:38:36.783954787Z" level=info msg="CreateContainer within sandbox \"1278cc796d6807d0e2c0106563e67e5929530ce1e76e53d0cfa8df843001793f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"02a3ec7119224130495396288c277c666c586f6aa493c72af72d66f9ab0df712\"" Sep 12 17:38:36.785706 containerd[1456]: time="2025-09-12T17:38:36.785552488Z" level=info msg="StartContainer for \"02a3ec7119224130495396288c277c666c586f6aa493c72af72d66f9ab0df712\"" Sep 12 17:38:36.852056 systemd[1]: Started cri-containerd-02a3ec7119224130495396288c277c666c586f6aa493c72af72d66f9ab0df712.scope - libcontainer container 02a3ec7119224130495396288c277c666c586f6aa493c72af72d66f9ab0df712. Sep 12 17:38:36.935645 containerd[1456]: time="2025-09-12T17:38:36.935474807Z" level=info msg="StartContainer for \"02a3ec7119224130495396288c277c666c586f6aa493c72af72d66f9ab0df712\" returns successfully" Sep 12 17:38:37.660641 kubelet[2605]: E0912 17:38:37.660559 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whntj" podUID="3a3ff327-190a-4ebe-85de-f1209e1870ef" Sep 12 17:38:37.710376 containerd[1456]: time="2025-09-12T17:38:37.710299935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:37.712183 containerd[1456]: time="2025-09-12T17:38:37.712089575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:38:37.715536 containerd[1456]: time="2025-09-12T17:38:37.714090822Z" level=info msg="ImageCreate event 
name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:37.723206 containerd[1456]: time="2025-09-12T17:38:37.723126665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:37.725323 containerd[1456]: time="2025-09-12T17:38:37.725277693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 993.260073ms" Sep 12 17:38:37.725637 containerd[1456]: time="2025-09-12T17:38:37.725603628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:38:37.730213 containerd[1456]: time="2025-09-12T17:38:37.730155226Z" level=info msg="CreateContainer within sandbox \"9906a9af8ac08b61c1800bda34e4fbe22d5974be75fd3eaf71bac5b3c866483f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:38:37.757257 containerd[1456]: time="2025-09-12T17:38:37.757181447Z" level=info msg="CreateContainer within sandbox \"9906a9af8ac08b61c1800bda34e4fbe22d5974be75fd3eaf71bac5b3c866483f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"733bdee2a1014e936aeec807e67beaef4da975f8ea57f7f3a8441b5da7e9cc2d\"" Sep 12 17:38:37.759737 containerd[1456]: time="2025-09-12T17:38:37.758362548Z" level=info msg="StartContainer for \"733bdee2a1014e936aeec807e67beaef4da975f8ea57f7f3a8441b5da7e9cc2d\"" Sep 12 17:38:37.840143 systemd[1]: 
Started cri-containerd-733bdee2a1014e936aeec807e67beaef4da975f8ea57f7f3a8441b5da7e9cc2d.scope - libcontainer container 733bdee2a1014e936aeec807e67beaef4da975f8ea57f7f3a8441b5da7e9cc2d. Sep 12 17:38:37.902491 kubelet[2605]: I0912 17:38:37.901259 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58b5dbb77b-rwfrw" podStartSLOduration=2.264847471 podStartE2EDuration="4.901222325s" podCreationTimestamp="2025-09-12 17:38:33 +0000 UTC" firstStartedPulling="2025-09-12 17:38:34.095375012 +0000 UTC m=+21.619174765" lastFinishedPulling="2025-09-12 17:38:36.731749848 +0000 UTC m=+24.255549619" observedRunningTime="2025-09-12 17:38:37.873012014 +0000 UTC m=+25.396811799" watchObservedRunningTime="2025-09-12 17:38:37.901222325 +0000 UTC m=+25.425022098" Sep 12 17:38:37.971856 containerd[1456]: time="2025-09-12T17:38:37.971788394Z" level=info msg="StartContainer for \"733bdee2a1014e936aeec807e67beaef4da975f8ea57f7f3a8441b5da7e9cc2d\" returns successfully" Sep 12 17:38:37.995294 systemd[1]: cri-containerd-733bdee2a1014e936aeec807e67beaef4da975f8ea57f7f3a8441b5da7e9cc2d.scope: Deactivated successfully. Sep 12 17:38:38.039610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-733bdee2a1014e936aeec807e67beaef4da975f8ea57f7f3a8441b5da7e9cc2d-rootfs.mount: Deactivated successfully. 
Sep 12 17:38:38.712457 containerd[1456]: time="2025-09-12T17:38:38.712237188Z" level=info msg="shim disconnected" id=733bdee2a1014e936aeec807e67beaef4da975f8ea57f7f3a8441b5da7e9cc2d namespace=k8s.io Sep 12 17:38:38.712457 containerd[1456]: time="2025-09-12T17:38:38.712340091Z" level=warning msg="cleaning up after shim disconnected" id=733bdee2a1014e936aeec807e67beaef4da975f8ea57f7f3a8441b5da7e9cc2d namespace=k8s.io Sep 12 17:38:38.712457 containerd[1456]: time="2025-09-12T17:38:38.712356837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:38:38.867825 containerd[1456]: time="2025-09-12T17:38:38.867453042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:38:39.660258 kubelet[2605]: E0912 17:38:39.660143 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whntj" podUID="3a3ff327-190a-4ebe-85de-f1209e1870ef" Sep 12 17:38:41.660673 kubelet[2605]: E0912 17:38:41.660193 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whntj" podUID="3a3ff327-190a-4ebe-85de-f1209e1870ef" Sep 12 17:38:42.274685 containerd[1456]: time="2025-09-12T17:38:42.273586099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:42.274685 containerd[1456]: time="2025-09-12T17:38:42.274615009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:38:42.276388 containerd[1456]: time="2025-09-12T17:38:42.276321784Z" level=info msg="ImageCreate event 
name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:42.279641 containerd[1456]: time="2025-09-12T17:38:42.279586872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:42.280861 containerd[1456]: time="2025-09-12T17:38:42.280697592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.413189515s" Sep 12 17:38:42.280861 containerd[1456]: time="2025-09-12T17:38:42.280743890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:38:42.285090 containerd[1456]: time="2025-09-12T17:38:42.285038230Z" level=info msg="CreateContainer within sandbox \"9906a9af8ac08b61c1800bda34e4fbe22d5974be75fd3eaf71bac5b3c866483f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:38:42.311816 containerd[1456]: time="2025-09-12T17:38:42.311747747Z" level=info msg="CreateContainer within sandbox \"9906a9af8ac08b61c1800bda34e4fbe22d5974be75fd3eaf71bac5b3c866483f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56\"" Sep 12 17:38:42.313041 containerd[1456]: time="2025-09-12T17:38:42.312843380Z" level=info msg="StartContainer for \"d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56\"" Sep 12 17:38:42.365316 systemd[1]: 
run-containerd-runc-k8s.io-d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56-runc.GaXP3L.mount: Deactivated successfully. Sep 12 17:38:42.373890 systemd[1]: Started cri-containerd-d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56.scope - libcontainer container d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56. Sep 12 17:38:42.438861 containerd[1456]: time="2025-09-12T17:38:42.438797784Z" level=info msg="StartContainer for \"d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56\" returns successfully" Sep 12 17:38:43.547990 containerd[1456]: time="2025-09-12T17:38:43.547925162Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Sep 12 17:38:43.550879 systemd[1]: cri-containerd-d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56.scope: Deactivated successfully. Sep 12 17:38:43.591168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56-rootfs.mount: Deactivated successfully. Sep 12 17:38:43.618955 kubelet[2605]: I0912 17:38:43.618892 2605 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:38:43.678983 systemd[1]: Created slice kubepods-burstable-podc9918ac5_51bf_44cc_9226_aa00d6fecc77.slice - libcontainer container kubepods-burstable-podc9918ac5_51bf_44cc_9226_aa00d6fecc77.slice. Sep 12 17:38:43.694899 systemd[1]: Created slice kubepods-besteffort-pod3a3ff327_190a_4ebe_85de_f1209e1870ef.slice - libcontainer container kubepods-besteffort-pod3a3ff327_190a_4ebe_85de_f1209e1870ef.slice. 
Sep 12 17:38:43.730531 containerd[1456]: time="2025-09-12T17:38:43.728164648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whntj,Uid:3a3ff327-190a-4ebe-85de-f1209e1870ef,Namespace:calico-system,Attempt:0,}" Sep 12 17:38:43.736931 systemd[1]: Created slice kubepods-besteffort-pod54a13898_563c_4fc8_9cbb_281683965c07.slice - libcontainer container kubepods-besteffort-pod54a13898_563c_4fc8_9cbb_281683965c07.slice. Sep 12 17:38:43.753947 systemd[1]: Created slice kubepods-burstable-poddab6dd14_c5b2_47a3_9f8b_31765591695a.slice - libcontainer container kubepods-burstable-poddab6dd14_c5b2_47a3_9f8b_31765591695a.slice. Sep 12 17:38:43.775832 systemd[1]: Created slice kubepods-besteffort-poda81aa3ac_9352_4654_98c8_0683fd08fedb.slice - libcontainer container kubepods-besteffort-poda81aa3ac_9352_4654_98c8_0683fd08fedb.slice. Sep 12 17:38:43.790574 systemd[1]: Created slice kubepods-besteffort-pod5dc1035e_b888_4db3_8f54_9136957333b5.slice - libcontainer container kubepods-besteffort-pod5dc1035e_b888_4db3_8f54_9136957333b5.slice. Sep 12 17:38:43.804697 systemd[1]: Created slice kubepods-besteffort-pod97524985_31c6_450c_8df7_109ab8004a00.slice - libcontainer container kubepods-besteffort-pod97524985_31c6_450c_8df7_109ab8004a00.slice. 
Sep 12 17:38:43.818983 kubelet[2605]: I0912 17:38:43.818916 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfzk\" (UniqueName: \"kubernetes.io/projected/6d036e19-1f2c-48c7-8f67-1bff02bce89d-kube-api-access-2rfzk\") pod \"goldmane-7988f88666-q5wmn\" (UID: \"6d036e19-1f2c-48c7-8f67-1bff02bce89d\") " pod="calico-system/goldmane-7988f88666-q5wmn" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.818994 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj8c4\" (UniqueName: \"kubernetes.io/projected/dab6dd14-c5b2-47a3-9f8b-31765591695a-kube-api-access-bj8c4\") pod \"coredns-7c65d6cfc9-425vx\" (UID: \"dab6dd14-c5b2-47a3-9f8b-31765591695a\") " pod="kube-system/coredns-7c65d6cfc9-425vx" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819026 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a81aa3ac-9352-4654-98c8-0683fd08fedb-calico-apiserver-certs\") pod \"calico-apiserver-77ff955bcc-6mgcv\" (UID: \"a81aa3ac-9352-4654-98c8-0683fd08fedb\") " pod="calico-apiserver/calico-apiserver-77ff955bcc-6mgcv" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819054 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9918ac5-51bf-44cc-9226-aa00d6fecc77-config-volume\") pod \"coredns-7c65d6cfc9-4768z\" (UID: \"c9918ac5-51bf-44cc-9226-aa00d6fecc77\") " pod="kube-system/coredns-7c65d6cfc9-4768z" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819084 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d036e19-1f2c-48c7-8f67-1bff02bce89d-goldmane-ca-bundle\") pod \"goldmane-7988f88666-q5wmn\" (UID: 
\"6d036e19-1f2c-48c7-8f67-1bff02bce89d\") " pod="calico-system/goldmane-7988f88666-q5wmn" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819121 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wwz7\" (UniqueName: \"kubernetes.io/projected/c9918ac5-51bf-44cc-9226-aa00d6fecc77-kube-api-access-5wwz7\") pod \"coredns-7c65d6cfc9-4768z\" (UID: \"c9918ac5-51bf-44cc-9226-aa00d6fecc77\") " pod="kube-system/coredns-7c65d6cfc9-4768z" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819151 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dc1035e-b888-4db3-8f54-9136957333b5-tigera-ca-bundle\") pod \"calico-kube-controllers-76b5f5566b-zxw5v\" (UID: \"5dc1035e-b888-4db3-8f54-9136957333b5\") " pod="calico-system/calico-kube-controllers-76b5f5566b-zxw5v" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819184 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97524985-31c6-450c-8df7-109ab8004a00-whisker-ca-bundle\") pod \"whisker-5bfdb4c46c-msrzz\" (UID: \"97524985-31c6-450c-8df7-109ab8004a00\") " pod="calico-system/whisker-5bfdb4c46c-msrzz" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819214 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d036e19-1f2c-48c7-8f67-1bff02bce89d-config\") pod \"goldmane-7988f88666-q5wmn\" (UID: \"6d036e19-1f2c-48c7-8f67-1bff02bce89d\") " pod="calico-system/goldmane-7988f88666-q5wmn" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819241 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/6d036e19-1f2c-48c7-8f67-1bff02bce89d-goldmane-key-pair\") pod \"goldmane-7988f88666-q5wmn\" (UID: \"6d036e19-1f2c-48c7-8f67-1bff02bce89d\") " pod="calico-system/goldmane-7988f88666-q5wmn" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819275 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97524985-31c6-450c-8df7-109ab8004a00-whisker-backend-key-pair\") pod \"whisker-5bfdb4c46c-msrzz\" (UID: \"97524985-31c6-450c-8df7-109ab8004a00\") " pod="calico-system/whisker-5bfdb4c46c-msrzz" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819303 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27cmb\" (UniqueName: \"kubernetes.io/projected/97524985-31c6-450c-8df7-109ab8004a00-kube-api-access-27cmb\") pod \"whisker-5bfdb4c46c-msrzz\" (UID: \"97524985-31c6-450c-8df7-109ab8004a00\") " pod="calico-system/whisker-5bfdb4c46c-msrzz" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819367 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8c9v\" (UniqueName: \"kubernetes.io/projected/54a13898-563c-4fc8-9cbb-281683965c07-kube-api-access-d8c9v\") pod \"calico-apiserver-77ff955bcc-9474z\" (UID: \"54a13898-563c-4fc8-9cbb-281683965c07\") " pod="calico-apiserver/calico-apiserver-77ff955bcc-9474z" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819402 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27fr5\" (UniqueName: \"kubernetes.io/projected/5dc1035e-b888-4db3-8f54-9136957333b5-kube-api-access-27fr5\") pod \"calico-kube-controllers-76b5f5566b-zxw5v\" (UID: \"5dc1035e-b888-4db3-8f54-9136957333b5\") " pod="calico-system/calico-kube-controllers-76b5f5566b-zxw5v" Sep 12 17:38:43.863944 kubelet[2605]: I0912 17:38:43.819458 2605 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dab6dd14-c5b2-47a3-9f8b-31765591695a-config-volume\") pod \"coredns-7c65d6cfc9-425vx\" (UID: \"dab6dd14-c5b2-47a3-9f8b-31765591695a\") " pod="kube-system/coredns-7c65d6cfc9-425vx" Sep 12 17:38:43.826076 systemd[1]: Created slice kubepods-besteffort-pod6d036e19_1f2c_48c7_8f67_1bff02bce89d.slice - libcontainer container kubepods-besteffort-pod6d036e19_1f2c_48c7_8f67_1bff02bce89d.slice. Sep 12 17:38:43.867320 kubelet[2605]: I0912 17:38:43.821549 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pt8w\" (UniqueName: \"kubernetes.io/projected/a81aa3ac-9352-4654-98c8-0683fd08fedb-kube-api-access-9pt8w\") pod \"calico-apiserver-77ff955bcc-6mgcv\" (UID: \"a81aa3ac-9352-4654-98c8-0683fd08fedb\") " pod="calico-apiserver/calico-apiserver-77ff955bcc-6mgcv" Sep 12 17:38:43.867320 kubelet[2605]: I0912 17:38:43.821653 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/54a13898-563c-4fc8-9cbb-281683965c07-calico-apiserver-certs\") pod \"calico-apiserver-77ff955bcc-9474z\" (UID: \"54a13898-563c-4fc8-9cbb-281683965c07\") " pod="calico-apiserver/calico-apiserver-77ff955bcc-9474z" Sep 12 17:38:44.046286 containerd[1456]: time="2025-09-12T17:38:44.046212666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77ff955bcc-9474z,Uid:54a13898-563c-4fc8-9cbb-281683965c07,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:38:44.061866 containerd[1456]: time="2025-09-12T17:38:44.061715491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-425vx,Uid:dab6dd14-c5b2-47a3-9f8b-31765591695a,Namespace:kube-system,Attempt:0,}" Sep 12 17:38:44.086937 containerd[1456]: time="2025-09-12T17:38:44.086858302Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77ff955bcc-6mgcv,Uid:a81aa3ac-9352-4654-98c8-0683fd08fedb,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:38:44.101988 containerd[1456]: time="2025-09-12T17:38:44.101913796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b5f5566b-zxw5v,Uid:5dc1035e-b888-4db3-8f54-9136957333b5,Namespace:calico-system,Attempt:0,}" Sep 12 17:38:44.117838 containerd[1456]: time="2025-09-12T17:38:44.117783526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bfdb4c46c-msrzz,Uid:97524985-31c6-450c-8df7-109ab8004a00,Namespace:calico-system,Attempt:0,}" Sep 12 17:38:44.144965 containerd[1456]: time="2025-09-12T17:38:44.144636770Z" level=info msg="shim disconnected" id=d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56 namespace=k8s.io Sep 12 17:38:44.144965 containerd[1456]: time="2025-09-12T17:38:44.144707132Z" level=warning msg="cleaning up after shim disconnected" id=d6f4eba697a31e147fea214b8f8938776f715982fb2d79092c275a9687dc0c56 namespace=k8s.io Sep 12 17:38:44.144965 containerd[1456]: time="2025-09-12T17:38:44.144721996Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:38:44.167035 containerd[1456]: time="2025-09-12T17:38:44.166981255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q5wmn,Uid:6d036e19-1f2c-48c7-8f67-1bff02bce89d,Namespace:calico-system,Attempt:0,}" Sep 12 17:38:44.304081 containerd[1456]: time="2025-09-12T17:38:44.304027664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4768z,Uid:c9918ac5-51bf-44cc-9226-aa00d6fecc77,Namespace:kube-system,Attempt:0,}" Sep 12 17:38:44.518736 containerd[1456]: time="2025-09-12T17:38:44.518660253Z" level=error msg="Failed to destroy network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.519538 containerd[1456]: time="2025-09-12T17:38:44.519395643Z" level=error msg="encountered an error cleaning up failed sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.519538 containerd[1456]: time="2025-09-12T17:38:44.519479896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whntj,Uid:3a3ff327-190a-4ebe-85de-f1209e1870ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.520147 kubelet[2605]: E0912 17:38:44.520099 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.521062 kubelet[2605]: E0912 17:38:44.520565 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-whntj" Sep 12 17:38:44.521062 kubelet[2605]: E0912 17:38:44.520616 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whntj" Sep 12 17:38:44.521062 kubelet[2605]: E0912 17:38:44.520704 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-whntj_calico-system(3a3ff327-190a-4ebe-85de-f1209e1870ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-whntj_calico-system(3a3ff327-190a-4ebe-85de-f1209e1870ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-whntj" podUID="3a3ff327-190a-4ebe-85de-f1209e1870ef" Sep 12 17:38:44.594647 containerd[1456]: time="2025-09-12T17:38:44.589825691Z" level=error msg="Failed to destroy network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.594647 containerd[1456]: time="2025-09-12T17:38:44.590348807Z" level=error msg="encountered an error cleaning up failed sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.594647 containerd[1456]: time="2025-09-12T17:38:44.590435331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-425vx,Uid:dab6dd14-c5b2-47a3-9f8b-31765591695a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.598930 kubelet[2605]: E0912 17:38:44.592232 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.598930 kubelet[2605]: E0912 17:38:44.592314 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-425vx" Sep 12 17:38:44.598930 kubelet[2605]: E0912 17:38:44.592348 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-425vx" Sep 12 17:38:44.598930 kubelet[2605]: E0912 17:38:44.592412 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-425vx_kube-system(dab6dd14-c5b2-47a3-9f8b-31765591695a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-425vx_kube-system(dab6dd14-c5b2-47a3-9f8b-31765591695a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-425vx" podUID="dab6dd14-c5b2-47a3-9f8b-31765591695a" Sep 12 17:38:44.652725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663-shm.mount: Deactivated successfully. 
Sep 12 17:38:44.703119 containerd[1456]: time="2025-09-12T17:38:44.703055562Z" level=error msg="Failed to destroy network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.705547 containerd[1456]: time="2025-09-12T17:38:44.704839800Z" level=error msg="Failed to destroy network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.706698 containerd[1456]: time="2025-09-12T17:38:44.706651240Z" level=error msg="encountered an error cleaning up failed sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.708827 containerd[1456]: time="2025-09-12T17:38:44.708637735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77ff955bcc-9474z,Uid:54a13898-563c-4fc8-9cbb-281683965c07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.709584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf-shm.mount: Deactivated successfully. 
Sep 12 17:38:44.713805 containerd[1456]: time="2025-09-12T17:38:44.707767875Z" level=error msg="encountered an error cleaning up failed sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.713902 kubelet[2605]: E0912 17:38:44.709809 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.713902 kubelet[2605]: E0912 17:38:44.709892 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77ff955bcc-9474z" Sep 12 17:38:44.713902 kubelet[2605]: E0912 17:38:44.709944 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77ff955bcc-9474z" Sep 12 17:38:44.713902 kubelet[2605]: E0912 17:38:44.710006 2605 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77ff955bcc-9474z_calico-apiserver(54a13898-563c-4fc8-9cbb-281683965c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77ff955bcc-9474z_calico-apiserver(54a13898-563c-4fc8-9cbb-281683965c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77ff955bcc-9474z" podUID="54a13898-563c-4fc8-9cbb-281683965c07" Sep 12 17:38:44.718697 containerd[1456]: time="2025-09-12T17:38:44.715220607Z" level=error msg="Failed to destroy network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.721283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d-shm.mount: Deactivated successfully. 
Sep 12 17:38:44.721589 containerd[1456]: time="2025-09-12T17:38:44.717991715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77ff955bcc-6mgcv,Uid:a81aa3ac-9352-4654-98c8-0683fd08fedb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.724622 containerd[1456]: time="2025-09-12T17:38:44.718895874Z" level=error msg="encountered an error cleaning up failed sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.724622 containerd[1456]: time="2025-09-12T17:38:44.723341232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b5f5566b-zxw5v,Uid:5dc1035e-b888-4db3-8f54-9136957333b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:38:44.728815 kubelet[2605]: E0912 17:38:44.727642 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/"
Sep 12 17:38:44.728815 kubelet[2605]: E0912 17:38:44.727755 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77ff955bcc-6mgcv"
Sep 12 17:38:44.728815 kubelet[2605]: E0912 17:38:44.727790 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77ff955bcc-6mgcv"
Sep 12 17:38:44.728815 kubelet[2605]: E0912 17:38:44.727853 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77ff955bcc-6mgcv_calico-apiserver(a81aa3ac-9352-4654-98c8-0683fd08fedb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77ff955bcc-6mgcv_calico-apiserver(a81aa3ac-9352-4654-98c8-0683fd08fedb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77ff955bcc-6mgcv" podUID="a81aa3ac-9352-4654-98c8-0683fd08fedb"
Sep 12 17:38:44.729223 containerd[1456]: time="2025-09-12T17:38:44.728650511Z" level=error msg="Failed to destroy network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.736395 kubelet[2605]: E0912 17:38:44.730204 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.736395 kubelet[2605]: E0912 17:38:44.730278 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76b5f5566b-zxw5v"
Sep 12 17:38:44.736395 kubelet[2605]: E0912 17:38:44.730307 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76b5f5566b-zxw5v"
Sep 12 17:38:44.736395 kubelet[2605]: E0912 17:38:44.730364 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76b5f5566b-zxw5v_calico-system(5dc1035e-b888-4db3-8f54-9136957333b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76b5f5566b-zxw5v_calico-system(5dc1035e-b888-4db3-8f54-9136957333b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76b5f5566b-zxw5v" podUID="5dc1035e-b888-4db3-8f54-9136957333b5"
Sep 12 17:38:44.736864 containerd[1456]: time="2025-09-12T17:38:44.735045069Z" level=error msg="encountered an error cleaning up failed sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.736864 containerd[1456]: time="2025-09-12T17:38:44.735140780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bfdb4c46c-msrzz,Uid:97524985-31c6-450c-8df7-109ab8004a00,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.731760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614-shm.mount: Deactivated successfully.
Sep 12 17:38:44.739947 kubelet[2605]: E0912 17:38:44.738638 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.739947 kubelet[2605]: E0912 17:38:44.738714 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bfdb4c46c-msrzz"
Sep 12 17:38:44.739947 kubelet[2605]: E0912 17:38:44.738743 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bfdb4c46c-msrzz"
Sep 12 17:38:44.739947 kubelet[2605]: E0912 17:38:44.738821 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bfdb4c46c-msrzz_calico-system(97524985-31c6-450c-8df7-109ab8004a00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bfdb4c46c-msrzz_calico-system(97524985-31c6-450c-8df7-109ab8004a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bfdb4c46c-msrzz" podUID="97524985-31c6-450c-8df7-109ab8004a00"
Sep 12 17:38:44.756492 containerd[1456]: time="2025-09-12T17:38:44.756409693Z" level=error msg="Failed to destroy network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.758523 containerd[1456]: time="2025-09-12T17:38:44.758454422Z" level=error msg="encountered an error cleaning up failed sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.758675 containerd[1456]: time="2025-09-12T17:38:44.758558086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4768z,Uid:c9918ac5-51bf-44cc-9226-aa00d6fecc77,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.759020 kubelet[2605]: E0912 17:38:44.758900 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.759363 kubelet[2605]: E0912 17:38:44.759122 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4768z"
Sep 12 17:38:44.759363 kubelet[2605]: E0912 17:38:44.759164 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4768z"
Sep 12 17:38:44.759636 containerd[1456]: time="2025-09-12T17:38:44.759075089Z" level=error msg="Failed to destroy network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.759636 containerd[1456]: time="2025-09-12T17:38:44.759453608Z" level=error msg="encountered an error cleaning up failed sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.759756 containerd[1456]: time="2025-09-12T17:38:44.759610041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q5wmn,Uid:6d036e19-1f2c-48c7-8f67-1bff02bce89d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.760481 kubelet[2605]: E0912 17:38:44.760235 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:44.760481 kubelet[2605]: E0912 17:38:44.760292 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-q5wmn"
Sep 12 17:38:44.760481 kubelet[2605]: E0912 17:38:44.760317 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-q5wmn"
Sep 12 17:38:44.760481 kubelet[2605]: E0912 17:38:44.760366 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-q5wmn_calico-system(6d036e19-1f2c-48c7-8f67-1bff02bce89d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-q5wmn_calico-system(6d036e19-1f2c-48c7-8f67-1bff02bce89d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-q5wmn" podUID="6d036e19-1f2c-48c7-8f67-1bff02bce89d"
Sep 12 17:38:44.760481 kubelet[2605]: E0912 17:38:44.759495 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4768z_kube-system(c9918ac5-51bf-44cc-9226-aa00d6fecc77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4768z_kube-system(c9918ac5-51bf-44cc-9226-aa00d6fecc77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4768z" podUID="c9918ac5-51bf-44cc-9226-aa00d6fecc77"
Sep 12 17:38:44.900992 containerd[1456]: time="2025-09-12T17:38:44.900829804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:38:44.903306 kubelet[2605]: I0912 17:38:44.903263 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc"
Sep 12 17:38:44.907609 containerd[1456]: time="2025-09-12T17:38:44.907483210Z" level=info msg="StopPodSandbox for \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\""
Sep 12 17:38:44.908021 containerd[1456]: time="2025-09-12T17:38:44.907759732Z" level=info msg="Ensure that sandbox 4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc in task-service has been cleanup successfully"
Sep 12 17:38:44.908535 kubelet[2605]: I0912 17:38:44.908446 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf"
Sep 12 17:38:44.913801 containerd[1456]: time="2025-09-12T17:38:44.910127558Z" level=info msg="StopPodSandbox for \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\""
Sep 12 17:38:44.913801 containerd[1456]: time="2025-09-12T17:38:44.910393028Z" level=info msg="Ensure that sandbox 829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf in task-service has been cleanup successfully"
Sep 12 17:38:44.914095 kubelet[2605]: I0912 17:38:44.913316 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d"
Sep 12 17:38:44.916818 containerd[1456]: time="2025-09-12T17:38:44.916280237Z" level=info msg="StopPodSandbox for \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\""
Sep 12 17:38:44.918333 containerd[1456]: time="2025-09-12T17:38:44.918283401Z" level=info msg="Ensure that sandbox d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d in task-service has been cleanup successfully"
Sep 12 17:38:44.918723 kubelet[2605]: I0912 17:38:44.918686 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614"
Sep 12 17:38:44.920361 containerd[1456]: time="2025-09-12T17:38:44.920307194Z" level=info msg="StopPodSandbox for \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\""
Sep 12 17:38:44.921163 containerd[1456]: time="2025-09-12T17:38:44.920560682Z" level=info msg="Ensure that sandbox de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614 in task-service has been cleanup successfully"
Sep 12 17:38:44.922839 kubelet[2605]: I0912 17:38:44.922803 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230"
Sep 12 17:38:44.925669 containerd[1456]: time="2025-09-12T17:38:44.925608892Z" level=info msg="StopPodSandbox for \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\""
Sep 12 17:38:44.925917 containerd[1456]: time="2025-09-12T17:38:44.925884451Z" level=info msg="Ensure that sandbox 115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230 in task-service has been cleanup successfully"
Sep 12 17:38:44.943220 kubelet[2605]: I0912 17:38:44.943156 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a"
Sep 12 17:38:44.946598 containerd[1456]: time="2025-09-12T17:38:44.945594346Z" level=info msg="StopPodSandbox for \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\""
Sep 12 17:38:44.947953 containerd[1456]: time="2025-09-12T17:38:44.946953448Z" level=info msg="Ensure that sandbox 6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a in task-service has been cleanup successfully"
Sep 12 17:38:44.966456 kubelet[2605]: I0912 17:38:44.966288 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb"
Sep 12 17:38:44.981874 containerd[1456]: time="2025-09-12T17:38:44.981664900Z" level=info msg="StopPodSandbox for \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\""
Sep 12 17:38:44.983343 containerd[1456]: time="2025-09-12T17:38:44.983192005Z" level=info msg="Ensure that sandbox 4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb in task-service has been cleanup successfully"
Sep 12 17:38:45.017340 kubelet[2605]: I0912 17:38:45.017302 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663"
Sep 12 17:38:45.021154 containerd[1456]: time="2025-09-12T17:38:45.021092317Z" level=info msg="StopPodSandbox for \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\""
Sep 12 17:38:45.022241 containerd[1456]: time="2025-09-12T17:38:45.022158162Z" level=info msg="Ensure that sandbox b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663 in task-service has been cleanup successfully"
Sep 12 17:38:45.097199 containerd[1456]: time="2025-09-12T17:38:45.096974980Z" level=error msg="StopPodSandbox for \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\" failed" error="failed to destroy network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:45.097467 kubelet[2605]: E0912 17:38:45.097315 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614"
Sep 12 17:38:45.097467 kubelet[2605]: E0912 17:38:45.097398 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614"}
Sep 12 17:38:45.097794 kubelet[2605]: E0912 17:38:45.097498 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5dc1035e-b888-4db3-8f54-9136957333b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:38:45.097794 kubelet[2605]: E0912 17:38:45.097557 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5dc1035e-b888-4db3-8f54-9136957333b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76b5f5566b-zxw5v" podUID="5dc1035e-b888-4db3-8f54-9136957333b5"
Sep 12 17:38:45.196805 containerd[1456]: time="2025-09-12T17:38:45.196736742Z" level=error msg="StopPodSandbox for \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\" failed" error="failed to destroy network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:45.199032 kubelet[2605]: E0912 17:38:45.198972 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc"
Sep 12 17:38:45.199213 kubelet[2605]: E0912 17:38:45.199054 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc"}
Sep 12 17:38:45.199213 kubelet[2605]: E0912 17:38:45.199102 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9918ac5-51bf-44cc-9226-aa00d6fecc77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:38:45.199213 kubelet[2605]: E0912 17:38:45.199138 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9918ac5-51bf-44cc-9226-aa00d6fecc77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4768z" podUID="c9918ac5-51bf-44cc-9226-aa00d6fecc77"
Sep 12 17:38:45.208981 containerd[1456]: time="2025-09-12T17:38:45.208910217Z" level=error msg="StopPodSandbox for \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\" failed" error="failed to destroy network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:45.209293 kubelet[2605]: E0912 17:38:45.209229 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf"
Sep 12 17:38:45.209574 kubelet[2605]: E0912 17:38:45.209300 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf"}
Sep 12 17:38:45.209574 kubelet[2605]: E0912 17:38:45.209354 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54a13898-563c-4fc8-9cbb-281683965c07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:38:45.209574 kubelet[2605]: E0912 17:38:45.209399 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54a13898-563c-4fc8-9cbb-281683965c07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77ff955bcc-9474z" podUID="54a13898-563c-4fc8-9cbb-281683965c07"
Sep 12 17:38:45.210428 containerd[1456]: time="2025-09-12T17:38:45.209691305Z" level=error msg="StopPodSandbox for \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\" failed" error="failed to destroy network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:45.210741 kubelet[2605]: E0912 17:38:45.210697 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb"
Sep 12 17:38:45.210994 kubelet[2605]: E0912 17:38:45.210761 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb"}
Sep 12 17:38:45.210994 kubelet[2605]: E0912 17:38:45.210808 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a3ff327-190a-4ebe-85de-f1209e1870ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:38:45.210994 kubelet[2605]: E0912 17:38:45.210854 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a3ff327-190a-4ebe-85de-f1209e1870ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-whntj" podUID="3a3ff327-190a-4ebe-85de-f1209e1870ef"
Sep 12 17:38:45.217082 containerd[1456]: time="2025-09-12T17:38:45.216668102Z" level=error msg="StopPodSandbox for \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\" failed" error="failed to destroy network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:45.217493 kubelet[2605]: E0912 17:38:45.217426 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663"
Sep 12 17:38:45.217682 kubelet[2605]: E0912 17:38:45.217519 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663"}
Sep 12 17:38:45.217682 kubelet[2605]: E0912 17:38:45.217567 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dab6dd14-c5b2-47a3-9f8b-31765591695a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:38:45.217682 kubelet[2605]: E0912 17:38:45.217609 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dab6dd14-c5b2-47a3-9f8b-31765591695a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-425vx" podUID="dab6dd14-c5b2-47a3-9f8b-31765591695a"
Sep 12 17:38:45.220658 containerd[1456]: time="2025-09-12T17:38:45.220284935Z" level=error msg="StopPodSandbox for \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\" failed" error="failed to destroy network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:45.221111 kubelet[2605]: E0912 17:38:45.221027 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a"
Sep 12 17:38:45.221272 kubelet[2605]: E0912 17:38:45.221147 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a"}
Sep 12 17:38:45.221272 kubelet[2605]: E0912 17:38:45.221215 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d036e19-1f2c-48c7-8f67-1bff02bce89d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:38:45.221686 kubelet[2605]: E0912 17:38:45.221259 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d036e19-1f2c-48c7-8f67-1bff02bce89d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-q5wmn" podUID="6d036e19-1f2c-48c7-8f67-1bff02bce89d"
Sep 12 17:38:45.224404 containerd[1456]: time="2025-09-12T17:38:45.223990781Z" level=error msg="StopPodSandbox for \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\" failed" error="failed to destroy network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:45.224639 kubelet[2605]: E0912 17:38:45.224434 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230"
Sep 12 17:38:45.224639 kubelet[2605]: E0912 17:38:45.224535 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230"}
Sep 12 17:38:45.224639 kubelet[2605]: E0912 17:38:45.224602 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97524985-31c6-450c-8df7-109ab8004a00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:38:45.224918 kubelet[2605]: E0912 17:38:45.224641 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97524985-31c6-450c-8df7-109ab8004a00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bfdb4c46c-msrzz" podUID="97524985-31c6-450c-8df7-109ab8004a00"
Sep 12 17:38:45.225405 containerd[1456]: time="2025-09-12T17:38:45.225278924Z" level=error msg="StopPodSandbox for \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\" failed" error="failed to destroy network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:38:45.225694 kubelet[2605]: E0912 17:38:45.225630 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d"
Sep 12 17:38:45.225797 kubelet[2605]: E0912 17:38:45.225706 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d"}
Sep 12 17:38:45.225797 kubelet[2605]: E0912 17:38:45.225770 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a81aa3ac-9352-4654-98c8-0683fd08fedb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 17:38:45.225974 kubelet[2605]: E0912 17:38:45.225841 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a81aa3ac-9352-4654-98c8-0683fd08fedb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77ff955bcc-6mgcv" podUID="a81aa3ac-9352-4654-98c8-0683fd08fedb"
Sep 12 17:38:45.591390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc-shm.mount: Deactivated successfully.
Sep 12 17:38:45.591677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a-shm.mount: Deactivated successfully.
Sep 12 17:38:45.591957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230-shm.mount: Deactivated successfully.
Sep 12 17:38:52.558725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2154684957.mount: Deactivated successfully.
Sep 12 17:38:52.590403 containerd[1456]: time="2025-09-12T17:38:52.590323238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:52.592306 containerd[1456]: time="2025-09-12T17:38:52.592014265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:38:52.595533 containerd[1456]: time="2025-09-12T17:38:52.593873712Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:52.598574 containerd[1456]: time="2025-09-12T17:38:52.598526768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:52.601547 containerd[1456]: time="2025-09-12T17:38:52.599309837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.698415405s" Sep 12 17:38:52.601547 containerd[1456]: time="2025-09-12T17:38:52.599360121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:38:52.626116 containerd[1456]: time="2025-09-12T17:38:52.625754549Z" level=info msg="CreateContainer within sandbox \"9906a9af8ac08b61c1800bda34e4fbe22d5974be75fd3eaf71bac5b3c866483f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:38:52.655256 containerd[1456]: time="2025-09-12T17:38:52.655187062Z" level=info 
msg="CreateContainer within sandbox \"9906a9af8ac08b61c1800bda34e4fbe22d5974be75fd3eaf71bac5b3c866483f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"19130fc3f580e4aadb0034882851c5a3509d6e3cfb76203f053e7b0c5133b499\"" Sep 12 17:38:52.657279 containerd[1456]: time="2025-09-12T17:38:52.656234387Z" level=info msg="StartContainer for \"19130fc3f580e4aadb0034882851c5a3509d6e3cfb76203f053e7b0c5133b499\"" Sep 12 17:38:52.709787 systemd[1]: Started cri-containerd-19130fc3f580e4aadb0034882851c5a3509d6e3cfb76203f053e7b0c5133b499.scope - libcontainer container 19130fc3f580e4aadb0034882851c5a3509d6e3cfb76203f053e7b0c5133b499. Sep 12 17:38:52.763262 containerd[1456]: time="2025-09-12T17:38:52.763203503Z" level=info msg="StartContainer for \"19130fc3f580e4aadb0034882851c5a3509d6e3cfb76203f053e7b0c5133b499\" returns successfully" Sep 12 17:38:52.910734 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:38:52.910920 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 17:38:53.081820 containerd[1456]: time="2025-09-12T17:38:53.081744965Z" level=info msg="StopPodSandbox for \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\"" Sep 12 17:38:53.198477 kubelet[2605]: I0912 17:38:53.198211 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4zs64" podStartSLOduration=2.869592917 podStartE2EDuration="20.198175145s" podCreationTimestamp="2025-09-12 17:38:33 +0000 UTC" firstStartedPulling="2025-09-12 17:38:35.274151897 +0000 UTC m=+22.797951652" lastFinishedPulling="2025-09-12 17:38:52.602734117 +0000 UTC m=+40.126533880" observedRunningTime="2025-09-12 17:38:53.125543641 +0000 UTC m=+40.649343413" watchObservedRunningTime="2025-09-12 17:38:53.198175145 +0000 UTC m=+40.721974918" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.199 [INFO][3815] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.200 [INFO][3815] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" iface="eth0" netns="/var/run/netns/cni-4930971b-ead8-d348-d0d7-81566c32b36a" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.201 [INFO][3815] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" iface="eth0" netns="/var/run/netns/cni-4930971b-ead8-d348-d0d7-81566c32b36a" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.201 [INFO][3815] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" iface="eth0" netns="/var/run/netns/cni-4930971b-ead8-d348-d0d7-81566c32b36a" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.201 [INFO][3815] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.201 [INFO][3815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.263 [INFO][3832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" HandleID="k8s-pod-network.115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.264 [INFO][3832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.264 [INFO][3832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.276 [WARNING][3832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" HandleID="k8s-pod-network.115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.276 [INFO][3832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" HandleID="k8s-pod-network.115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.279 [INFO][3832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:53.286477 containerd[1456]: 2025-09-12 17:38:53.283 [INFO][3815] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:38:53.286477 containerd[1456]: time="2025-09-12T17:38:53.285870957Z" level=info msg="TearDown network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\" successfully" Sep 12 17:38:53.286477 containerd[1456]: time="2025-09-12T17:38:53.285967578Z" level=info msg="StopPodSandbox for \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\" returns successfully" Sep 12 17:38:53.412544 kubelet[2605]: I0912 17:38:53.411253 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97524985-31c6-450c-8df7-109ab8004a00-whisker-backend-key-pair\") pod \"97524985-31c6-450c-8df7-109ab8004a00\" (UID: \"97524985-31c6-450c-8df7-109ab8004a00\") " Sep 12 17:38:53.412544 kubelet[2605]: I0912 17:38:53.411336 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/97524985-31c6-450c-8df7-109ab8004a00-whisker-ca-bundle\") pod \"97524985-31c6-450c-8df7-109ab8004a00\" (UID: \"97524985-31c6-450c-8df7-109ab8004a00\") " Sep 12 17:38:53.412544 kubelet[2605]: I0912 17:38:53.411382 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27cmb\" (UniqueName: \"kubernetes.io/projected/97524985-31c6-450c-8df7-109ab8004a00-kube-api-access-27cmb\") pod \"97524985-31c6-450c-8df7-109ab8004a00\" (UID: \"97524985-31c6-450c-8df7-109ab8004a00\") " Sep 12 17:38:53.419039 kubelet[2605]: I0912 17:38:53.418175 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97524985-31c6-450c-8df7-109ab8004a00-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "97524985-31c6-450c-8df7-109ab8004a00" (UID: "97524985-31c6-450c-8df7-109ab8004a00"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:38:53.420411 kubelet[2605]: I0912 17:38:53.420378 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97524985-31c6-450c-8df7-109ab8004a00-kube-api-access-27cmb" (OuterVolumeSpecName: "kube-api-access-27cmb") pod "97524985-31c6-450c-8df7-109ab8004a00" (UID: "97524985-31c6-450c-8df7-109ab8004a00"). InnerVolumeSpecName "kube-api-access-27cmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:38:53.420781 kubelet[2605]: I0912 17:38:53.420726 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97524985-31c6-450c-8df7-109ab8004a00-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "97524985-31c6-450c-8df7-109ab8004a00" (UID: "97524985-31c6-450c-8df7-109ab8004a00"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:38:53.512468 kubelet[2605]: I0912 17:38:53.512417 2605 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97524985-31c6-450c-8df7-109ab8004a00-whisker-backend-key-pair\") on node \"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" DevicePath \"\"" Sep 12 17:38:53.512468 kubelet[2605]: I0912 17:38:53.512465 2605 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97524985-31c6-450c-8df7-109ab8004a00-whisker-ca-bundle\") on node \"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" DevicePath \"\"" Sep 12 17:38:53.512468 kubelet[2605]: I0912 17:38:53.512481 2605 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27cmb\" (UniqueName: \"kubernetes.io/projected/97524985-31c6-450c-8df7-109ab8004a00-kube-api-access-27cmb\") on node \"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal\" DevicePath \"\"" Sep 12 17:38:53.557355 systemd[1]: run-netns-cni\x2d4930971b\x2dead8\x2dd348\x2dd0d7\x2d81566c32b36a.mount: Deactivated successfully. Sep 12 17:38:53.557838 systemd[1]: var-lib-kubelet-pods-97524985\x2d31c6\x2d450c\x2d8df7\x2d109ab8004a00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d27cmb.mount: Deactivated successfully. Sep 12 17:38:53.557974 systemd[1]: var-lib-kubelet-pods-97524985\x2d31c6\x2d450c\x2d8df7\x2d109ab8004a00-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:38:54.067420 systemd[1]: Removed slice kubepods-besteffort-pod97524985_31c6_450c_8df7_109ab8004a00.slice - libcontainer container kubepods-besteffort-pod97524985_31c6_450c_8df7_109ab8004a00.slice. 
Sep 12 17:38:54.181207 systemd[1]: Created slice kubepods-besteffort-podf82e5d67_ef02_4bc4_a41a_1fb437846881.slice - libcontainer container kubepods-besteffort-podf82e5d67_ef02_4bc4_a41a_1fb437846881.slice. Sep 12 17:38:54.318755 kubelet[2605]: I0912 17:38:54.318389 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f82e5d67-ef02-4bc4-a41a-1fb437846881-whisker-ca-bundle\") pod \"whisker-58fb5c669d-9hcnz\" (UID: \"f82e5d67-ef02-4bc4-a41a-1fb437846881\") " pod="calico-system/whisker-58fb5c669d-9hcnz" Sep 12 17:38:54.318755 kubelet[2605]: I0912 17:38:54.318475 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7g5q\" (UniqueName: \"kubernetes.io/projected/f82e5d67-ef02-4bc4-a41a-1fb437846881-kube-api-access-q7g5q\") pod \"whisker-58fb5c669d-9hcnz\" (UID: \"f82e5d67-ef02-4bc4-a41a-1fb437846881\") " pod="calico-system/whisker-58fb5c669d-9hcnz" Sep 12 17:38:54.318755 kubelet[2605]: I0912 17:38:54.318591 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f82e5d67-ef02-4bc4-a41a-1fb437846881-whisker-backend-key-pair\") pod \"whisker-58fb5c669d-9hcnz\" (UID: \"f82e5d67-ef02-4bc4-a41a-1fb437846881\") " pod="calico-system/whisker-58fb5c669d-9hcnz" Sep 12 17:38:54.490190 containerd[1456]: time="2025-09-12T17:38:54.490131376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58fb5c669d-9hcnz,Uid:f82e5d67-ef02-4bc4-a41a-1fb437846881,Namespace:calico-system,Attempt:0,}" Sep 12 17:38:54.669737 kubelet[2605]: I0912 17:38:54.669572 2605 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97524985-31c6-450c-8df7-109ab8004a00" path="/var/lib/kubelet/pods/97524985-31c6-450c-8df7-109ab8004a00/volumes" Sep 12 17:38:54.705826 systemd-networkd[1368]: cali6757fc80113: 
Link UP Sep 12 17:38:54.707212 systemd-networkd[1368]: cali6757fc80113: Gained carrier Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.549 [INFO][3885] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.572 [INFO][3885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0 whisker-58fb5c669d- calico-system f82e5d67-ef02-4bc4-a41a-1fb437846881 894 0 2025-09-12 17:38:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58fb5c669d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal whisker-58fb5c669d-9hcnz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6757fc80113 [] [] }} ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Namespace="calico-system" Pod="whisker-58fb5c669d-9hcnz" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.573 [INFO][3885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Namespace="calico-system" Pod="whisker-58fb5c669d-9hcnz" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.614 [INFO][3898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" HandleID="k8s-pod-network.c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" 
Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.614 [INFO][3898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" HandleID="k8s-pod-network.c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000494e60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", "pod":"whisker-58fb5c669d-9hcnz", "timestamp":"2025-09-12 17:38:54.614647089 +0000 UTC"}, Hostname:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.615 [INFO][3898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.615 [INFO][3898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.615 [INFO][3898] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.629 [INFO][3898] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.636 [INFO][3898] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.644 [INFO][3898] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.647 [INFO][3898] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.651 [INFO][3898] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.651 [INFO][3898] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.654 [INFO][3898] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.663 [INFO][3898] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.59.64/26 handle="k8s-pod-network.c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.675 [INFO][3898] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.65/26] block=192.168.59.64/26 handle="k8s-pod-network.c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.675 [INFO][3898] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.65/26] handle="k8s-pod-network.c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.675 [INFO][3898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:54.750738 containerd[1456]: 2025-09-12 17:38:54.676 [INFO][3898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.65/26] IPv6=[] ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" HandleID="k8s-pod-network.c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" Sep 12 17:38:54.755291 containerd[1456]: 2025-09-12 17:38:54.682 [INFO][3885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Namespace="calico-system" Pod="whisker-58fb5c669d-9hcnz" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0", GenerateName:"whisker-58fb5c669d-", Namespace:"calico-system", SelfLink:"", UID:"f82e5d67-ef02-4bc4-a41a-1fb437846881", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58fb5c669d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-58fb5c669d-9hcnz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6757fc80113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:54.755291 containerd[1456]: 2025-09-12 17:38:54.682 [INFO][3885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.65/32] ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Namespace="calico-system" Pod="whisker-58fb5c669d-9hcnz" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" Sep 12 17:38:54.755291 containerd[1456]: 2025-09-12 17:38:54.682 [INFO][3885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6757fc80113 
ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Namespace="calico-system" Pod="whisker-58fb5c669d-9hcnz" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" Sep 12 17:38:54.755291 containerd[1456]: 2025-09-12 17:38:54.709 [INFO][3885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Namespace="calico-system" Pod="whisker-58fb5c669d-9hcnz" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" Sep 12 17:38:54.755291 containerd[1456]: 2025-09-12 17:38:54.710 [INFO][3885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Namespace="calico-system" Pod="whisker-58fb5c669d-9hcnz" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0", GenerateName:"whisker-58fb5c669d-", Namespace:"calico-system", SelfLink:"", UID:"f82e5d67-ef02-4bc4-a41a-1fb437846881", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58fb5c669d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c", Pod:"whisker-58fb5c669d-9hcnz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6757fc80113", MAC:"6e:06:8e:ac:e3:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:54.755291 containerd[1456]: 2025-09-12 17:38:54.740 [INFO][3885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c" Namespace="calico-system" Pod="whisker-58fb5c669d-9hcnz" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--58fb5c669d--9hcnz-eth0" Sep 12 17:38:54.811494 containerd[1456]: time="2025-09-12T17:38:54.811344726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:38:54.811810 containerd[1456]: time="2025-09-12T17:38:54.811498441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:38:54.811810 containerd[1456]: time="2025-09-12T17:38:54.811585572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:54.812572 containerd[1456]: time="2025-09-12T17:38:54.811950460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:54.874659 systemd[1]: Started cri-containerd-c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c.scope - libcontainer container c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c. Sep 12 17:38:55.026545 containerd[1456]: time="2025-09-12T17:38:55.025833377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58fb5c669d-9hcnz,Uid:f82e5d67-ef02-4bc4-a41a-1fb437846881,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c\"" Sep 12 17:38:55.037038 containerd[1456]: time="2025-09-12T17:38:55.034975432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:38:55.545610 kernel: bpftool[4070]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:38:55.666636 containerd[1456]: time="2025-09-12T17:38:55.666575709Z" level=info msg="StopPodSandbox for \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\"" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.761 [INFO][4080] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.762 [INFO][4080] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" iface="eth0" netns="/var/run/netns/cni-40e9e7d1-d603-9c8d-e595-7a7e29df72a0" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.764 [INFO][4080] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" iface="eth0" netns="/var/run/netns/cni-40e9e7d1-d603-9c8d-e595-7a7e29df72a0" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.767 [INFO][4080] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" iface="eth0" netns="/var/run/netns/cni-40e9e7d1-d603-9c8d-e595-7a7e29df72a0" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.767 [INFO][4080] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.767 [INFO][4080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.814 [INFO][4088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" HandleID="k8s-pod-network.b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.815 [INFO][4088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.815 [INFO][4088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.828 [WARNING][4088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" HandleID="k8s-pod-network.b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.828 [INFO][4088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" HandleID="k8s-pod-network.b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.834 [INFO][4088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:55.839936 containerd[1456]: 2025-09-12 17:38:55.836 [INFO][4080] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:38:55.841906 containerd[1456]: time="2025-09-12T17:38:55.841297881Z" level=info msg="TearDown network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\" successfully" Sep 12 17:38:55.841906 containerd[1456]: time="2025-09-12T17:38:55.841344463Z" level=info msg="StopPodSandbox for \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\" returns successfully" Sep 12 17:38:55.847555 containerd[1456]: time="2025-09-12T17:38:55.847402524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-425vx,Uid:dab6dd14-c5b2-47a3-9f8b-31765591695a,Namespace:kube-system,Attempt:1,}" Sep 12 17:38:55.848449 systemd[1]: run-netns-cni\x2d40e9e7d1\x2dd603\x2d9c8d\x2de595\x2d7a7e29df72a0.mount: Deactivated successfully. 
Sep 12 17:38:56.066975 systemd-networkd[1368]: vxlan.calico: Link UP Sep 12 17:38:56.066988 systemd-networkd[1368]: vxlan.calico: Gained carrier Sep 12 17:38:56.152579 systemd-networkd[1368]: calif2a2a19aa13: Link UP Sep 12 17:38:56.152983 systemd-networkd[1368]: calif2a2a19aa13: Gained carrier Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:55.963 [INFO][4107] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0 coredns-7c65d6cfc9- kube-system dab6dd14-c5b2-47a3-9f8b-31765591695a 903 0 2025-09-12 17:38:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal coredns-7c65d6cfc9-425vx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2a2a19aa13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Namespace="kube-system" Pod="coredns-7c65d6cfc9-425vx" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:55.963 [INFO][4107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Namespace="kube-system" Pod="coredns-7c65d6cfc9-425vx" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.027 [INFO][4122] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" 
HandleID="k8s-pod-network.f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.028 [INFO][4122] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" HandleID="k8s-pod-network.f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", "pod":"coredns-7c65d6cfc9-425vx", "timestamp":"2025-09-12 17:38:56.027355443 +0000 UTC"}, Hostname:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.028 [INFO][4122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.028 [INFO][4122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.028 [INFO][4122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.051 [INFO][4122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.064 [INFO][4122] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.076 [INFO][4122] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.081 [INFO][4122] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.087 [INFO][4122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.087 [INFO][4122] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.090 [INFO][4122] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096 Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.109 [INFO][4122] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.59.64/26 handle="k8s-pod-network.f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.131 [INFO][4122] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.66/26] block=192.168.59.64/26 handle="k8s-pod-network.f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.132 [INFO][4122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.66/26] handle="k8s-pod-network.f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.132 [INFO][4122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:56.203465 containerd[1456]: 2025-09-12 17:38:56.132 [INFO][4122] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.66/26] IPv6=[] ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" HandleID="k8s-pod-network.f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:56.205552 containerd[1456]: 2025-09-12 17:38:56.136 [INFO][4107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Namespace="kube-system" Pod="coredns-7c65d6cfc9-425vx" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dab6dd14-c5b2-47a3-9f8b-31765591695a", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7c65d6cfc9-425vx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2a2a19aa13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:56.205552 containerd[1456]: 2025-09-12 17:38:56.136 [INFO][4107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.66/32] ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-425vx" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:56.205552 containerd[1456]: 2025-09-12 17:38:56.136 [INFO][4107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2a2a19aa13 ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Namespace="kube-system" Pod="coredns-7c65d6cfc9-425vx" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:56.205552 containerd[1456]: 2025-09-12 17:38:56.156 [INFO][4107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Namespace="kube-system" Pod="coredns-7c65d6cfc9-425vx" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:56.205552 containerd[1456]: 2025-09-12 17:38:56.162 [INFO][4107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Namespace="kube-system" Pod="coredns-7c65d6cfc9-425vx" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dab6dd14-c5b2-47a3-9f8b-31765591695a", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", 
"pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096", Pod:"coredns-7c65d6cfc9-425vx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2a2a19aa13", MAC:"8e:c1:f3:d7:cf:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:56.205552 containerd[1456]: 2025-09-12 17:38:56.190 [INFO][4107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096" Namespace="kube-system" Pod="coredns-7c65d6cfc9-425vx" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:38:56.210416 systemd-networkd[1368]: cali6757fc80113: Gained IPv6LL Sep 12 17:38:56.304284 containerd[1456]: time="2025-09-12T17:38:56.303843189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:38:56.304646 containerd[1456]: time="2025-09-12T17:38:56.304592762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:38:56.305751 containerd[1456]: time="2025-09-12T17:38:56.305518154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:56.306129 containerd[1456]: time="2025-09-12T17:38:56.306012184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:56.383809 systemd[1]: Started cri-containerd-f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096.scope - libcontainer container f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096. Sep 12 17:38:56.538143 containerd[1456]: time="2025-09-12T17:38:56.538059507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:56.541663 containerd[1456]: time="2025-09-12T17:38:56.541590557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:38:56.545533 containerd[1456]: time="2025-09-12T17:38:56.544446042Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:56.551592 containerd[1456]: time="2025-09-12T17:38:56.551477970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:56.554774 containerd[1456]: time="2025-09-12T17:38:56.554721515Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.519654628s" Sep 12 17:38:56.555625 containerd[1456]: time="2025-09-12T17:38:56.555591114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:38:56.562029 containerd[1456]: time="2025-09-12T17:38:56.561976882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-425vx,Uid:dab6dd14-c5b2-47a3-9f8b-31765591695a,Namespace:kube-system,Attempt:1,} returns sandbox id \"f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096\"" Sep 12 17:38:56.567211 containerd[1456]: time="2025-09-12T17:38:56.567163373Z" level=info msg="CreateContainer within sandbox \"c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:38:56.569249 containerd[1456]: time="2025-09-12T17:38:56.569201008Z" level=info msg="CreateContainer within sandbox \"f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:38:56.599008 containerd[1456]: time="2025-09-12T17:38:56.598948275Z" level=info msg="CreateContainer within sandbox \"c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"90f34dfc78dcd7c6c6f04115efed183bc24fa13a67b716b88419b604d49e5baf\"" Sep 12 17:38:56.601845 containerd[1456]: time="2025-09-12T17:38:56.600975466Z" level=info msg="StartContainer for \"90f34dfc78dcd7c6c6f04115efed183bc24fa13a67b716b88419b604d49e5baf\"" Sep 12 17:38:56.619367 containerd[1456]: 
time="2025-09-12T17:38:56.619299905Z" level=info msg="CreateContainer within sandbox \"f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c8e83ba706a39b6192cf665b0bc7e44f54d8e28c345454fab3b9fb899b5b532\"" Sep 12 17:38:56.622839 containerd[1456]: time="2025-09-12T17:38:56.622758169Z" level=info msg="StartContainer for \"3c8e83ba706a39b6192cf665b0bc7e44f54d8e28c345454fab3b9fb899b5b532\"" Sep 12 17:38:56.669079 containerd[1456]: time="2025-09-12T17:38:56.668479699Z" level=info msg="StopPodSandbox for \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\"" Sep 12 17:38:56.707972 systemd[1]: Started cri-containerd-90f34dfc78dcd7c6c6f04115efed183bc24fa13a67b716b88419b604d49e5baf.scope - libcontainer container 90f34dfc78dcd7c6c6f04115efed183bc24fa13a67b716b88419b604d49e5baf. Sep 12 17:38:56.753801 systemd[1]: Started cri-containerd-3c8e83ba706a39b6192cf665b0bc7e44f54d8e28c345454fab3b9fb899b5b532.scope - libcontainer container 3c8e83ba706a39b6192cf665b0bc7e44f54d8e28c345454fab3b9fb899b5b532. Sep 12 17:38:56.847813 containerd[1456]: time="2025-09-12T17:38:56.844836829Z" level=info msg="StartContainer for \"3c8e83ba706a39b6192cf665b0bc7e44f54d8e28c345454fab3b9fb899b5b532\" returns successfully" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.852 [INFO][4237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.859 [INFO][4237] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" iface="eth0" netns="/var/run/netns/cni-92723871-05d1-aeb0-bdde-62ef5670b912" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.861 [INFO][4237] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" iface="eth0" netns="/var/run/netns/cni-92723871-05d1-aeb0-bdde-62ef5670b912" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.862 [INFO][4237] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" iface="eth0" netns="/var/run/netns/cni-92723871-05d1-aeb0-bdde-62ef5670b912" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.862 [INFO][4237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.862 [INFO][4237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.936 [INFO][4285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" HandleID="k8s-pod-network.829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.938 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.939 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.972 [WARNING][4285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" HandleID="k8s-pod-network.829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.972 [INFO][4285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" HandleID="k8s-pod-network.829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.980 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:56.996138 containerd[1456]: 2025-09-12 17:38:56.991 [INFO][4237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:38:56.999092 containerd[1456]: time="2025-09-12T17:38:56.996933785Z" level=info msg="TearDown network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\" successfully" Sep 12 17:38:56.999092 containerd[1456]: time="2025-09-12T17:38:56.996986996Z" level=info msg="StopPodSandbox for \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\" returns successfully" Sep 12 17:38:56.999092 containerd[1456]: time="2025-09-12T17:38:56.997402797Z" level=info msg="StartContainer for \"90f34dfc78dcd7c6c6f04115efed183bc24fa13a67b716b88419b604d49e5baf\" returns successfully" Sep 12 17:38:57.002960 containerd[1456]: time="2025-09-12T17:38:57.002654059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:38:57.004327 containerd[1456]: time="2025-09-12T17:38:57.004153200Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-77ff955bcc-9474z,Uid:54a13898-563c-4fc8-9cbb-281683965c07,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:38:57.005787 systemd[1]: run-netns-cni\x2d92723871\x2d05d1\x2daeb0\x2dbdde\x2d62ef5670b912.mount: Deactivated successfully. Sep 12 17:38:57.129087 kubelet[2605]: I0912 17:38:57.128815 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-425vx" podStartSLOduration=39.12867307 podStartE2EDuration="39.12867307s" podCreationTimestamp="2025-09-12 17:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:38:57.125755107 +0000 UTC m=+44.649554880" watchObservedRunningTime="2025-09-12 17:38:57.12867307 +0000 UTC m=+44.652472842" Sep 12 17:38:57.290638 systemd-networkd[1368]: calif2a2a19aa13: Gained IPv6LL Sep 12 17:38:57.320855 systemd-networkd[1368]: calia38192e9c60: Link UP Sep 12 17:38:57.324313 systemd-networkd[1368]: calia38192e9c60: Gained carrier Sep 12 17:38:57.353719 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.180 [INFO][4313] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0 calico-apiserver-77ff955bcc- calico-apiserver 54a13898-563c-4fc8-9cbb-281683965c07 915 0 2025-09-12 17:38:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77ff955bcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal calico-apiserver-77ff955bcc-9474z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
calia38192e9c60 [] [] }} ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-9474z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.181 [INFO][4313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-9474z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.239 [INFO][4331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" HandleID="k8s-pod-network.3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.239 [INFO][4331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" HandleID="k8s-pod-network.3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", "pod":"calico-apiserver-77ff955bcc-9474z", "timestamp":"2025-09-12 17:38:57.239102196 +0000 UTC"}, Hostname:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.239 [INFO][4331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.239 [INFO][4331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.239 [INFO][4331] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.252 [INFO][4331] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.266 [INFO][4331] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.275 [INFO][4331] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.278 [INFO][4331] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.282 [INFO][4331] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.282 [INFO][4331] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 
handle="k8s-pod-network.3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.284 [INFO][4331] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.293 [INFO][4331] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.307 [INFO][4331] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.67/26] block=192.168.59.64/26 handle="k8s-pod-network.3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.307 [INFO][4331] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.67/26] handle="k8s-pod-network.3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.307 [INFO][4331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:38:57.373845 containerd[1456]: 2025-09-12 17:38:57.307 [INFO][4331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.67/26] IPv6=[] ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" HandleID="k8s-pod-network.3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:57.376760 containerd[1456]: 2025-09-12 17:38:57.312 [INFO][4313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-9474z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0", GenerateName:"calico-apiserver-77ff955bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"54a13898-563c-4fc8-9cbb-281683965c07", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77ff955bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-77ff955bcc-9474z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia38192e9c60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:57.376760 containerd[1456]: 2025-09-12 17:38:57.312 [INFO][4313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.67/32] ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-9474z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:57.376760 containerd[1456]: 2025-09-12 17:38:57.312 [INFO][4313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia38192e9c60 ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-9474z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:57.376760 containerd[1456]: 2025-09-12 17:38:57.329 [INFO][4313] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-9474z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:57.376760 containerd[1456]: 2025-09-12 17:38:57.333 [INFO][4313] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-9474z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0", GenerateName:"calico-apiserver-77ff955bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"54a13898-563c-4fc8-9cbb-281683965c07", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77ff955bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac", Pod:"calico-apiserver-77ff955bcc-9474z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia38192e9c60", MAC:"c6:e3:15:e6:8a:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:57.376760 containerd[1456]: 
2025-09-12 17:38:57.367 [INFO][4313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-9474z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:38:57.422775 containerd[1456]: time="2025-09-12T17:38:57.420079478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:38:57.422775 containerd[1456]: time="2025-09-12T17:38:57.420172601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:38:57.422775 containerd[1456]: time="2025-09-12T17:38:57.420200749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:57.422775 containerd[1456]: time="2025-09-12T17:38:57.420363877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:57.471803 systemd[1]: Started cri-containerd-3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac.scope - libcontainer container 3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac. 
Sep 12 17:38:57.551198 containerd[1456]: time="2025-09-12T17:38:57.551132646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77ff955bcc-9474z,Uid:54a13898-563c-4fc8-9cbb-281683965c07,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac\"" Sep 12 17:38:57.662819 containerd[1456]: time="2025-09-12T17:38:57.661498202Z" level=info msg="StopPodSandbox for \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\"" Sep 12 17:38:57.662819 containerd[1456]: time="2025-09-12T17:38:57.661578118Z" level=info msg="StopPodSandbox for \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\"" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.770 [INFO][4436] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.771 [INFO][4436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" iface="eth0" netns="/var/run/netns/cni-8429ead1-bbfb-d9b7-3077-ecc912e967ba" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.772 [INFO][4436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" iface="eth0" netns="/var/run/netns/cni-8429ead1-bbfb-d9b7-3077-ecc912e967ba" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.773 [INFO][4436] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" iface="eth0" netns="/var/run/netns/cni-8429ead1-bbfb-d9b7-3077-ecc912e967ba" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.773 [INFO][4436] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.773 [INFO][4436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.849 [INFO][4450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" HandleID="k8s-pod-network.4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.850 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.853 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.871 [WARNING][4450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" HandleID="k8s-pod-network.4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.871 [INFO][4450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" HandleID="k8s-pod-network.4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.874 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:57.879179 containerd[1456]: 2025-09-12 17:38:57.877 [INFO][4436] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:38:57.881380 containerd[1456]: time="2025-09-12T17:38:57.879790915Z" level=info msg="TearDown network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\" successfully" Sep 12 17:38:57.881380 containerd[1456]: time="2025-09-12T17:38:57.879830772Z" level=info msg="StopPodSandbox for \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\" returns successfully" Sep 12 17:38:57.881380 containerd[1456]: time="2025-09-12T17:38:57.880787850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whntj,Uid:3a3ff327-190a-4ebe-85de-f1209e1870ef,Namespace:calico-system,Attempt:1,}" Sep 12 17:38:57.889133 systemd[1]: run-netns-cni\x2d8429ead1\x2dbbfb\x2dd9b7\x2d3077\x2decc912e967ba.mount: Deactivated successfully. 
Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.776 [INFO][4435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.776 [INFO][4435] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" iface="eth0" netns="/var/run/netns/cni-1e1ce318-63c0-2123-bbf6-142699638e2f" Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.776 [INFO][4435] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" iface="eth0" netns="/var/run/netns/cni-1e1ce318-63c0-2123-bbf6-142699638e2f" Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.777 [INFO][4435] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" iface="eth0" netns="/var/run/netns/cni-1e1ce318-63c0-2123-bbf6-142699638e2f" Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.777 [INFO][4435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.777 [INFO][4435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.873 [INFO][4452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" HandleID="k8s-pod-network.6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:57.923774 containerd[1456]: 
2025-09-12 17:38:57.874 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.874 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.898 [WARNING][4452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" HandleID="k8s-pod-network.6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.899 [INFO][4452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" HandleID="k8s-pod-network.6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.904 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:57.923774 containerd[1456]: 2025-09-12 17:38:57.919 [INFO][4435] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:38:57.927052 containerd[1456]: time="2025-09-12T17:38:57.923894606Z" level=info msg="TearDown network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\" successfully" Sep 12 17:38:57.927052 containerd[1456]: time="2025-09-12T17:38:57.923931248Z" level=info msg="StopPodSandbox for \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\" returns successfully" Sep 12 17:38:57.930045 containerd[1456]: time="2025-09-12T17:38:57.929252927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q5wmn,Uid:6d036e19-1f2c-48c7-8f67-1bff02bce89d,Namespace:calico-system,Attempt:1,}" Sep 12 17:38:57.931189 systemd[1]: run-netns-cni\x2d1e1ce318\x2d63c0\x2d2123\x2dbbf6\x2d142699638e2f.mount: Deactivated successfully. Sep 12 17:38:58.176768 systemd-networkd[1368]: calic67dc804e6c: Link UP Sep 12 17:38:58.184976 systemd-networkd[1368]: calic67dc804e6c: Gained carrier Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:57.994 [INFO][4464] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0 csi-node-driver- calico-system 3a3ff327-190a-4ebe-85de-f1209e1870ef 934 0 2025-09-12 17:38:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal csi-node-driver-whntj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic67dc804e6c [] [] }} ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Namespace="calico-system" 
Pod="csi-node-driver-whntj" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:57.994 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Namespace="calico-system" Pod="csi-node-driver-whntj" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.073 [INFO][4486] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" HandleID="k8s-pod-network.ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.073 [INFO][4486] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" HandleID="k8s-pod-network.ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", "pod":"csi-node-driver-whntj", "timestamp":"2025-09-12 17:38:58.073230718 +0000 UTC"}, Hostname:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.074 [INFO][4486] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.074 [INFO][4486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.074 [INFO][4486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.091 [INFO][4486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.102 [INFO][4486] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.119 [INFO][4486] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.122 [INFO][4486] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.129 [INFO][4486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.129 [INFO][4486] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.133 [INFO][4486] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196 Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.146 [INFO][4486] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.162 [INFO][4486] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.68/26] block=192.168.59.64/26 handle="k8s-pod-network.ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.162 [INFO][4486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.68/26] handle="k8s-pod-network.ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.162 [INFO][4486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:38:58.232152 containerd[1456]: 2025-09-12 17:38:58.162 [INFO][4486] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.68/26] IPv6=[] ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" HandleID="k8s-pod-network.ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:58.233337 containerd[1456]: 2025-09-12 17:38:58.166 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Namespace="calico-system" Pod="csi-node-driver-whntj" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3a3ff327-190a-4ebe-85de-f1209e1870ef", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"", 
Pod:"csi-node-driver-whntj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic67dc804e6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:58.233337 containerd[1456]: 2025-09-12 17:38:58.166 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.68/32] ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Namespace="calico-system" Pod="csi-node-driver-whntj" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:58.233337 containerd[1456]: 2025-09-12 17:38:58.166 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic67dc804e6c ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Namespace="calico-system" Pod="csi-node-driver-whntj" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:58.233337 containerd[1456]: 2025-09-12 17:38:58.184 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Namespace="calico-system" Pod="csi-node-driver-whntj" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:58.233337 containerd[1456]: 2025-09-12 17:38:58.196 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Namespace="calico-system" Pod="csi-node-driver-whntj" 
WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3a3ff327-190a-4ebe-85de-f1209e1870ef", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196", Pod:"csi-node-driver-whntj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic67dc804e6c", MAC:"42:45:3d:37:9b:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:58.233337 containerd[1456]: 2025-09-12 17:38:58.226 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196" Namespace="calico-system" Pod="csi-node-driver-whntj" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:38:58.304297 containerd[1456]: time="2025-09-12T17:38:58.303978584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:38:58.304297 containerd[1456]: time="2025-09-12T17:38:58.304067863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:38:58.304297 containerd[1456]: time="2025-09-12T17:38:58.304118601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:58.304797 containerd[1456]: time="2025-09-12T17:38:58.304345247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:58.327615 systemd-networkd[1368]: calic2c0021bac7: Link UP Sep 12 17:38:58.331030 systemd-networkd[1368]: calic2c0021bac7: Gained carrier Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.094 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0 goldmane-7988f88666- calico-system 6d036e19-1f2c-48c7-8f67-1bff02bce89d 933 0 2025-09-12 17:38:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal goldmane-7988f88666-q5wmn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic2c0021bac7 [] [] }} ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Namespace="calico-system" Pod="goldmane-7988f88666-q5wmn" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.095 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Namespace="calico-system" Pod="goldmane-7988f88666-q5wmn" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.186 [INFO][4496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" HandleID="k8s-pod-network.7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" 
Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.187 [INFO][4496] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" HandleID="k8s-pod-network.7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032f8e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", "pod":"goldmane-7988f88666-q5wmn", "timestamp":"2025-09-12 17:38:58.186723804 +0000 UTC"}, Hostname:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.187 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.187 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.188 [INFO][4496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.203 [INFO][4496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.212 [INFO][4496] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.223 [INFO][4496] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.237 [INFO][4496] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.246 [INFO][4496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.247 [INFO][4496] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.258 [INFO][4496] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11 Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.276 [INFO][4496] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.59.64/26 handle="k8s-pod-network.7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.303 [INFO][4496] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.69/26] block=192.168.59.64/26 handle="k8s-pod-network.7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.310 [INFO][4496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.69/26] handle="k8s-pod-network.7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.310 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:58.370036 containerd[1456]: 2025-09-12 17:38:58.310 [INFO][4496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.69/26] IPv6=[] ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" HandleID="k8s-pod-network.7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:58.374870 containerd[1456]: 2025-09-12 17:38:58.314 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Namespace="calico-system" Pod="goldmane-7988f88666-q5wmn" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6d036e19-1f2c-48c7-8f67-1bff02bce89d", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-7988f88666-q5wmn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2c0021bac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:58.374870 containerd[1456]: 2025-09-12 17:38:58.314 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.69/32] ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Namespace="calico-system" Pod="goldmane-7988f88666-q5wmn" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:58.374870 containerd[1456]: 2025-09-12 17:38:58.314 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2c0021bac7 
ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Namespace="calico-system" Pod="goldmane-7988f88666-q5wmn" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:58.374870 containerd[1456]: 2025-09-12 17:38:58.332 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Namespace="calico-system" Pod="goldmane-7988f88666-q5wmn" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:58.374870 containerd[1456]: 2025-09-12 17:38:58.334 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Namespace="calico-system" Pod="goldmane-7988f88666-q5wmn" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6d036e19-1f2c-48c7-8f67-1bff02bce89d", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11", Pod:"goldmane-7988f88666-q5wmn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2c0021bac7", MAC:"1e:62:f9:31:75:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:58.374870 containerd[1456]: 2025-09-12 17:38:58.363 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11" Namespace="calico-system" Pod="goldmane-7988f88666-q5wmn" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:38:58.384790 systemd[1]: Started cri-containerd-ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196.scope - libcontainer container ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196. Sep 12 17:38:58.471564 containerd[1456]: time="2025-09-12T17:38:58.471443428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:38:58.472187 containerd[1456]: time="2025-09-12T17:38:58.471858445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:38:58.473812 containerd[1456]: time="2025-09-12T17:38:58.473147884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:58.474467 containerd[1456]: time="2025-09-12T17:38:58.474045121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:58.483848 containerd[1456]: time="2025-09-12T17:38:58.483798747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whntj,Uid:3a3ff327-190a-4ebe-85de-f1209e1870ef,Namespace:calico-system,Attempt:1,} returns sandbox id \"ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196\"" Sep 12 17:38:58.510260 systemd[1]: Started cri-containerd-7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11.scope - libcontainer container 7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11. Sep 12 17:38:58.600050 containerd[1456]: time="2025-09-12T17:38:58.599983214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q5wmn,Uid:6d036e19-1f2c-48c7-8f67-1bff02bce89d,Namespace:calico-system,Attempt:1,} returns sandbox id \"7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11\"" Sep 12 17:38:58.665079 containerd[1456]: time="2025-09-12T17:38:58.663977477Z" level=info msg="StopPodSandbox for \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\"" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.784 [INFO][4620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.788 [INFO][4620] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" iface="eth0" netns="/var/run/netns/cni-9fb2c1a0-7ff3-9f8e-3a65-4341dc62cf23" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.789 [INFO][4620] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" iface="eth0" netns="/var/run/netns/cni-9fb2c1a0-7ff3-9f8e-3a65-4341dc62cf23" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.789 [INFO][4620] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" iface="eth0" netns="/var/run/netns/cni-9fb2c1a0-7ff3-9f8e-3a65-4341dc62cf23" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.789 [INFO][4620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.789 [INFO][4620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.849 [INFO][4627] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" HandleID="k8s-pod-network.de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.849 [INFO][4627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.849 [INFO][4627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.872 [WARNING][4627] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" HandleID="k8s-pod-network.de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.872 [INFO][4627] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" HandleID="k8s-pod-network.de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.879 [INFO][4627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:58.887082 containerd[1456]: 2025-09-12 17:38:58.881 [INFO][4620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:38:58.897134 containerd[1456]: time="2025-09-12T17:38:58.894577075Z" level=info msg="TearDown network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\" successfully" Sep 12 17:38:58.897134 containerd[1456]: time="2025-09-12T17:38:58.894631725Z" level=info msg="StopPodSandbox for \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\" returns successfully" Sep 12 17:38:58.897335 containerd[1456]: time="2025-09-12T17:38:58.897273094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b5f5566b-zxw5v,Uid:5dc1035e-b888-4db3-8f54-9136957333b5,Namespace:calico-system,Attempt:1,}" Sep 12 17:38:58.899011 systemd[1]: run-netns-cni\x2d9fb2c1a0\x2d7ff3\x2d9f8e\x2d3a65\x2d4341dc62cf23.mount: Deactivated successfully. 
Sep 12 17:38:59.205105 systemd-networkd[1368]: cali455a91e0b88: Link UP Sep 12 17:38:59.205496 systemd-networkd[1368]: cali455a91e0b88: Gained carrier Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.034 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0 calico-kube-controllers-76b5f5566b- calico-system 5dc1035e-b888-4db3-8f54-9136957333b5 947 0 2025-09-12 17:38:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76b5f5566b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal calico-kube-controllers-76b5f5566b-zxw5v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali455a91e0b88 [] [] }} ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Namespace="calico-system" Pod="calico-kube-controllers-76b5f5566b-zxw5v" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.034 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Namespace="calico-system" Pod="calico-kube-controllers-76b5f5566b-zxw5v" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.103 [INFO][4648] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" HandleID="k8s-pod-network.3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.106 [INFO][4648] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" HandleID="k8s-pod-network.3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000320ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", "pod":"calico-kube-controllers-76b5f5566b-zxw5v", "timestamp":"2025-09-12 17:38:59.103700178 +0000 UTC"}, Hostname:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.106 [INFO][4648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.106 [INFO][4648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.106 [INFO][4648] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.118 [INFO][4648] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.130 [INFO][4648] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.139 [INFO][4648] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.142 [INFO][4648] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.147 [INFO][4648] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.147 [INFO][4648] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.152 [INFO][4648] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1 Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.163 [INFO][4648] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.59.64/26 handle="k8s-pod-network.3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.181 [INFO][4648] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.70/26] block=192.168.59.64/26 handle="k8s-pod-network.3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.181 [INFO][4648] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.70/26] handle="k8s-pod-network.3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.181 [INFO][4648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:59.243590 containerd[1456]: 2025-09-12 17:38:59.181 [INFO][4648] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.70/26] IPv6=[] ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" HandleID="k8s-pod-network.3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:59.246647 containerd[1456]: 2025-09-12 17:38:59.189 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Namespace="calico-system" Pod="calico-kube-controllers-76b5f5566b-zxw5v" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0", GenerateName:"calico-kube-controllers-76b5f5566b-", Namespace:"calico-system", SelfLink:"", UID:"5dc1035e-b888-4db3-8f54-9136957333b5", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76b5f5566b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-76b5f5566b-zxw5v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali455a91e0b88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:59.246647 containerd[1456]: 2025-09-12 17:38:59.190 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.70/32] ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Namespace="calico-system" Pod="calico-kube-controllers-76b5f5566b-zxw5v" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:59.246647 containerd[1456]: 2025-09-12 
17:38:59.190 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali455a91e0b88 ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Namespace="calico-system" Pod="calico-kube-controllers-76b5f5566b-zxw5v" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:59.246647 containerd[1456]: 2025-09-12 17:38:59.206 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Namespace="calico-system" Pod="calico-kube-controllers-76b5f5566b-zxw5v" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:59.246647 containerd[1456]: 2025-09-12 17:38:59.207 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Namespace="calico-system" Pod="calico-kube-controllers-76b5f5566b-zxw5v" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0", GenerateName:"calico-kube-controllers-76b5f5566b-", Namespace:"calico-system", SelfLink:"", UID:"5dc1035e-b888-4db3-8f54-9136957333b5", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"76b5f5566b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1", Pod:"calico-kube-controllers-76b5f5566b-zxw5v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali455a91e0b88", MAC:"72:94:9e:70:c1:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:59.246647 containerd[1456]: 2025-09-12 17:38:59.233 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1" Namespace="calico-system" Pod="calico-kube-controllers-76b5f5566b-zxw5v" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:38:59.315512 containerd[1456]: time="2025-09-12T17:38:59.314200020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:38:59.315512 containerd[1456]: time="2025-09-12T17:38:59.314298035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:38:59.315512 containerd[1456]: time="2025-09-12T17:38:59.314325971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:59.315512 containerd[1456]: time="2025-09-12T17:38:59.314452004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:59.337733 systemd-networkd[1368]: calia38192e9c60: Gained IPv6LL Sep 12 17:38:59.372808 systemd[1]: Started cri-containerd-3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1.scope - libcontainer container 3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1. Sep 12 17:38:59.473233 containerd[1456]: time="2025-09-12T17:38:59.473066062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b5f5566b-zxw5v,Uid:5dc1035e-b888-4db3-8f54-9136957333b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1\"" Sep 12 17:38:59.786854 systemd-networkd[1368]: calic2c0021bac7: Gained IPv6LL Sep 12 17:38:59.850226 systemd-networkd[1368]: calic67dc804e6c: Gained IPv6LL Sep 12 17:38:59.971790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756351006.mount: Deactivated successfully. 
Sep 12 17:39:00.004627 containerd[1456]: time="2025-09-12T17:39:00.004566633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:00.006219 containerd[1456]: time="2025-09-12T17:39:00.006158156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:39:00.008009 containerd[1456]: time="2025-09-12T17:39:00.007960569Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:00.014555 containerd[1456]: time="2025-09-12T17:39:00.013353496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:00.015015 containerd[1456]: time="2025-09-12T17:39:00.014962337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.01225769s" Sep 12 17:39:00.015248 containerd[1456]: time="2025-09-12T17:39:00.015155906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:39:00.018388 containerd[1456]: time="2025-09-12T17:39:00.018346105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:39:00.019673 containerd[1456]: time="2025-09-12T17:39:00.019623981Z" level=info msg="CreateContainer within sandbox 
\"c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:39:00.045586 containerd[1456]: time="2025-09-12T17:39:00.045306440Z" level=info msg="CreateContainer within sandbox \"c2483f82aa7f86b6c70b0f86aa052f5810d64dd80ac7a957e2d71dda9b627d7c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1aec24982934fd311d2ddd695db26654ed17e6ed57d205681089c5753f4e17f2\"" Sep 12 17:39:00.047937 containerd[1456]: time="2025-09-12T17:39:00.046396711Z" level=info msg="StartContainer for \"1aec24982934fd311d2ddd695db26654ed17e6ed57d205681089c5753f4e17f2\"" Sep 12 17:39:00.111843 systemd[1]: Started cri-containerd-1aec24982934fd311d2ddd695db26654ed17e6ed57d205681089c5753f4e17f2.scope - libcontainer container 1aec24982934fd311d2ddd695db26654ed17e6ed57d205681089c5753f4e17f2. Sep 12 17:39:00.253098 containerd[1456]: time="2025-09-12T17:39:00.252926377Z" level=info msg="StartContainer for \"1aec24982934fd311d2ddd695db26654ed17e6ed57d205681089c5753f4e17f2\" returns successfully" Sep 12 17:39:00.662191 containerd[1456]: time="2025-09-12T17:39:00.661696147Z" level=info msg="StopPodSandbox for \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\"" Sep 12 17:39:00.668447 containerd[1456]: time="2025-09-12T17:39:00.668385569Z" level=info msg="StopPodSandbox for \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\"" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.768 [INFO][4763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.770 [INFO][4763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" iface="eth0" netns="/var/run/netns/cni-9d73f044-d86a-3953-bf29-6014d6c6a215" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.771 [INFO][4763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" iface="eth0" netns="/var/run/netns/cni-9d73f044-d86a-3953-bf29-6014d6c6a215" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.771 [INFO][4763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" iface="eth0" netns="/var/run/netns/cni-9d73f044-d86a-3953-bf29-6014d6c6a215" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.772 [INFO][4763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.772 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.832 [INFO][4775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" HandleID="k8s-pod-network.4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.834 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.834 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.856 [WARNING][4775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" HandleID="k8s-pod-network.4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.856 [INFO][4775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" HandleID="k8s-pod-network.4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.859 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:00.868103 containerd[1456]: 2025-09-12 17:39:00.863 [INFO][4763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:00.876544 containerd[1456]: time="2025-09-12T17:39:00.873832603Z" level=info msg="TearDown network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\" successfully" Sep 12 17:39:00.876826 containerd[1456]: time="2025-09-12T17:39:00.876757049Z" level=info msg="StopPodSandbox for \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\" returns successfully" Sep 12 17:39:00.880048 containerd[1456]: time="2025-09-12T17:39:00.880007793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4768z,Uid:c9918ac5-51bf-44cc-9226-aa00d6fecc77,Namespace:kube-system,Attempt:1,}" Sep 12 17:39:00.881006 systemd[1]: run-netns-cni\x2d9d73f044\x2dd86a\x2d3953\x2dbf29\x2d6014d6c6a215.mount: Deactivated successfully. 
Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.787 [INFO][4762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.787 [INFO][4762] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" iface="eth0" netns="/var/run/netns/cni-06c1a4fb-8eab-505b-bd7e-9c61e9f35d43" Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.789 [INFO][4762] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" iface="eth0" netns="/var/run/netns/cni-06c1a4fb-8eab-505b-bd7e-9c61e9f35d43" Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.791 [INFO][4762] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" iface="eth0" netns="/var/run/netns/cni-06c1a4fb-8eab-505b-bd7e-9c61e9f35d43" Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.791 [INFO][4762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.791 [INFO][4762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.882 [INFO][4780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" HandleID="k8s-pod-network.d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:00.941535 
containerd[1456]: 2025-09-12 17:39:00.883 [INFO][4780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.883 [INFO][4780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.903 [WARNING][4780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" HandleID="k8s-pod-network.d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.905 [INFO][4780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" HandleID="k8s-pod-network.d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.913 [INFO][4780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:00.941535 containerd[1456]: 2025-09-12 17:39:00.927 [INFO][4762] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:00.941535 containerd[1456]: time="2025-09-12T17:39:00.933127663Z" level=info msg="TearDown network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\" successfully" Sep 12 17:39:00.941535 containerd[1456]: time="2025-09-12T17:39:00.933576653Z" level=info msg="StopPodSandbox for \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\" returns successfully" Sep 12 17:39:00.946240 systemd[1]: run-netns-cni\x2d06c1a4fb\x2d8eab\x2d505b\x2dbd7e\x2d9c61e9f35d43.mount: Deactivated successfully. Sep 12 17:39:00.955357 containerd[1456]: time="2025-09-12T17:39:00.955036670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77ff955bcc-6mgcv,Uid:a81aa3ac-9352-4654-98c8-0683fd08fedb,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:39:01.131397 systemd-networkd[1368]: cali455a91e0b88: Gained IPv6LL Sep 12 17:39:01.296252 systemd-networkd[1368]: cali923faba856a: Link UP Sep 12 17:39:01.299978 systemd-networkd[1368]: cali923faba856a: Gained carrier Sep 12 17:39:01.336451 kubelet[2605]: I0912 17:39:01.335290 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58fb5c669d-9hcnz" podStartSLOduration=2.350965351 podStartE2EDuration="7.335264391s" podCreationTimestamp="2025-09-12 17:38:54 +0000 UTC" firstStartedPulling="2025-09-12 17:38:55.032730602 +0000 UTC m=+42.556530354" lastFinishedPulling="2025-09-12 17:39:00.017029605 +0000 UTC m=+47.540829394" observedRunningTime="2025-09-12 17:39:01.212896865 +0000 UTC m=+48.736696636" watchObservedRunningTime="2025-09-12 17:39:01.335264391 +0000 UTC m=+48.859064279" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.067 [INFO][4789] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0 coredns-7c65d6cfc9- kube-system c9918ac5-51bf-44cc-9226-aa00d6fecc77 961 0 2025-09-12 17:38:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal coredns-7c65d6cfc9-4768z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali923faba856a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4768z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.068 [INFO][4789] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4768z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.167 [INFO][4816] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" HandleID="k8s-pod-network.5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.168 [INFO][4816] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" HandleID="k8s-pod-network.5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" 
Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cde20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", "pod":"coredns-7c65d6cfc9-4768z", "timestamp":"2025-09-12 17:39:01.167396262 +0000 UTC"}, Hostname:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.168 [INFO][4816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.169 [INFO][4816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.169 [INFO][4816] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.189 [INFO][4816] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.217 [INFO][4816] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.235 [INFO][4816] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.244 [INFO][4816] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 
host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.252 [INFO][4816] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.252 [INFO][4816] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.255 [INFO][4816] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0 Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.264 [INFO][4816] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.275 [INFO][4816] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.71/26] block=192.168.59.64/26 handle="k8s-pod-network.5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.276 [INFO][4816] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.71/26] handle="k8s-pod-network.5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.276 [INFO][4816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:39:01.344400 containerd[1456]: 2025-09-12 17:39:01.276 [INFO][4816] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.71/26] IPv6=[] ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" HandleID="k8s-pod-network.5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:01.348876 containerd[1456]: 2025-09-12 17:39:01.283 [INFO][4789] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4768z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c9918ac5-51bf-44cc-9226-aa00d6fecc77", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7c65d6cfc9-4768z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali923faba856a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:01.348876 containerd[1456]: 2025-09-12 17:39:01.285 [INFO][4789] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.71/32] ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4768z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:01.348876 containerd[1456]: 2025-09-12 17:39:01.285 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali923faba856a ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4768z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:01.348876 containerd[1456]: 2025-09-12 17:39:01.298 [INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4768z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:01.348876 containerd[1456]: 2025-09-12 17:39:01.301 [INFO][4789] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4768z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c9918ac5-51bf-44cc-9226-aa00d6fecc77", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0", Pod:"coredns-7c65d6cfc9-4768z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali923faba856a", MAC:"7e:12:c0:fa:cd:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:01.348876 containerd[1456]: 2025-09-12 17:39:01.336 [INFO][4789] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4768z" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:01.452707 systemd-networkd[1368]: cali7360b0ee64a: Link UP Sep 12 17:39:01.458043 systemd-networkd[1368]: cali7360b0ee64a: Gained carrier Sep 12 17:39:01.485342 containerd[1456]: time="2025-09-12T17:39:01.484543900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:01.485555 containerd[1456]: time="2025-09-12T17:39:01.485306997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:01.485555 containerd[1456]: time="2025-09-12T17:39:01.485330369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:01.486282 containerd[1456]: time="2025-09-12T17:39:01.486220403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.104 [INFO][4801] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0 calico-apiserver-77ff955bcc- calico-apiserver a81aa3ac-9352-4654-98c8-0683fd08fedb 962 0 2025-09-12 17:38:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77ff955bcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal calico-apiserver-77ff955bcc-6mgcv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7360b0ee64a [] [] }} ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-6mgcv" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.104 [INFO][4801] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-6mgcv" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.215 [INFO][4821] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" HandleID="k8s-pod-network.94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" 
Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.221 [INFO][4821] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" HandleID="k8s-pod-network.94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", "pod":"calico-apiserver-77ff955bcc-6mgcv", "timestamp":"2025-09-12 17:39:01.215945058 +0000 UTC"}, Hostname:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.221 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.277 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.277 [INFO][4821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal' Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.306 [INFO][4821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.318 [INFO][4821] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.351 [INFO][4821] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.358 [INFO][4821] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.366 [INFO][4821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.366 [INFO][4821] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.372 [INFO][4821] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0 Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.387 [INFO][4821] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.59.64/26 handle="k8s-pod-network.94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.408 [INFO][4821] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.72/26] block=192.168.59.64/26 handle="k8s-pod-network.94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.409 [INFO][4821] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.72/26] handle="k8s-pod-network.94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" host="ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal" Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.409 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:01.510818 containerd[1456]: 2025-09-12 17:39:01.409 [INFO][4821] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.72/26] IPv6=[] ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" HandleID="k8s-pod-network.94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:01.512868 containerd[1456]: 2025-09-12 17:39:01.421 [INFO][4801] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-6mgcv" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0", GenerateName:"calico-apiserver-77ff955bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a81aa3ac-9352-4654-98c8-0683fd08fedb", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77ff955bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-77ff955bcc-6mgcv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7360b0ee64a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:01.512868 containerd[1456]: 2025-09-12 17:39:01.421 [INFO][4801] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.72/32] ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-6mgcv" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:01.512868 containerd[1456]: 2025-09-12 17:39:01.421 [INFO][4801] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7360b0ee64a ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-6mgcv" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:01.512868 containerd[1456]: 2025-09-12 17:39:01.470 [INFO][4801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-6mgcv" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:01.512868 containerd[1456]: 2025-09-12 17:39:01.478 [INFO][4801] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-6mgcv" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0", GenerateName:"calico-apiserver-77ff955bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a81aa3ac-9352-4654-98c8-0683fd08fedb", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77ff955bcc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0", Pod:"calico-apiserver-77ff955bcc-6mgcv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7360b0ee64a", MAC:"f6:9c:33:e2:ab:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:01.512868 containerd[1456]: 2025-09-12 17:39:01.504 [INFO][4801] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0" Namespace="calico-apiserver" Pod="calico-apiserver-77ff955bcc-6mgcv" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:01.570162 systemd[1]: Started cri-containerd-5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0.scope - libcontainer container 5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0. Sep 12 17:39:01.595163 containerd[1456]: time="2025-09-12T17:39:01.593140947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:01.595609 containerd[1456]: time="2025-09-12T17:39:01.595413423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:01.595609 containerd[1456]: time="2025-09-12T17:39:01.595481131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:01.598976 containerd[1456]: time="2025-09-12T17:39:01.598066165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:01.687783 systemd[1]: Started cri-containerd-94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0.scope - libcontainer container 94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0. Sep 12 17:39:01.761081 containerd[1456]: time="2025-09-12T17:39:01.760406817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4768z,Uid:c9918ac5-51bf-44cc-9226-aa00d6fecc77,Namespace:kube-system,Attempt:1,} returns sandbox id \"5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0\"" Sep 12 17:39:01.770141 containerd[1456]: time="2025-09-12T17:39:01.769793761Z" level=info msg="CreateContainer within sandbox \"5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:39:01.800897 containerd[1456]: time="2025-09-12T17:39:01.800835975Z" level=info msg="CreateContainer within sandbox \"5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e74ca0b5c9b06971318d0eb3ee93ee0f5346b96c63505269806deb590811d37\"" Sep 12 17:39:01.802525 containerd[1456]: time="2025-09-12T17:39:01.802376497Z" level=info msg="StartContainer for \"3e74ca0b5c9b06971318d0eb3ee93ee0f5346b96c63505269806deb590811d37\"" Sep 12 17:39:01.926344 systemd[1]: Started cri-containerd-3e74ca0b5c9b06971318d0eb3ee93ee0f5346b96c63505269806deb590811d37.scope - libcontainer container 
3e74ca0b5c9b06971318d0eb3ee93ee0f5346b96c63505269806deb590811d37. Sep 12 17:39:02.005723 containerd[1456]: time="2025-09-12T17:39:02.005661119Z" level=info msg="StartContainer for \"3e74ca0b5c9b06971318d0eb3ee93ee0f5346b96c63505269806deb590811d37\" returns successfully" Sep 12 17:39:02.058655 containerd[1456]: time="2025-09-12T17:39:02.058563507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77ff955bcc-6mgcv,Uid:a81aa3ac-9352-4654-98c8-0683fd08fedb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0\"" Sep 12 17:39:02.223867 kubelet[2605]: I0912 17:39:02.223748 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4768z" podStartSLOduration=45.223719676 podStartE2EDuration="45.223719676s" podCreationTimestamp="2025-09-12 17:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:02.198468988 +0000 UTC m=+49.722268772" watchObservedRunningTime="2025-09-12 17:39:02.223719676 +0000 UTC m=+49.747519450" Sep 12 17:39:02.670595 systemd-networkd[1368]: cali7360b0ee64a: Gained IPv6LL Sep 12 17:39:03.307801 systemd-networkd[1368]: cali923faba856a: Gained IPv6LL Sep 12 17:39:04.002064 containerd[1456]: time="2025-09-12T17:39:04.001992313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:04.005562 containerd[1456]: time="2025-09-12T17:39:04.003944041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:39:04.006806 containerd[1456]: time="2025-09-12T17:39:04.006651602Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 12 17:39:04.011151 containerd[1456]: time="2025-09-12T17:39:04.011050850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:04.012580 containerd[1456]: time="2025-09-12T17:39:04.012242930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.993093667s" Sep 12 17:39:04.012580 containerd[1456]: time="2025-09-12T17:39:04.012297136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:39:04.015870 containerd[1456]: time="2025-09-12T17:39:04.015568463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:39:04.018449 containerd[1456]: time="2025-09-12T17:39:04.018346065Z" level=info msg="CreateContainer within sandbox \"3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:39:04.044692 containerd[1456]: time="2025-09-12T17:39:04.044622408Z" level=info msg="CreateContainer within sandbox \"3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"706b6666ae38fe217656af412de3a7a22e92cddae0d8238fc8810af9684ee620\"" Sep 12 17:39:04.046238 containerd[1456]: time="2025-09-12T17:39:04.046029596Z" level=info msg="StartContainer for \"706b6666ae38fe217656af412de3a7a22e92cddae0d8238fc8810af9684ee620\"" Sep 12 17:39:04.114036 systemd[1]: Started 
cri-containerd-706b6666ae38fe217656af412de3a7a22e92cddae0d8238fc8810af9684ee620.scope - libcontainer container 706b6666ae38fe217656af412de3a7a22e92cddae0d8238fc8810af9684ee620. Sep 12 17:39:04.181132 containerd[1456]: time="2025-09-12T17:39:04.181064962Z" level=info msg="StartContainer for \"706b6666ae38fe217656af412de3a7a22e92cddae0d8238fc8810af9684ee620\" returns successfully" Sep 12 17:39:04.207769 kubelet[2605]: I0912 17:39:04.206881 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77ff955bcc-9474z" podStartSLOduration=28.74502993 podStartE2EDuration="35.206853005s" podCreationTimestamp="2025-09-12 17:38:29 +0000 UTC" firstStartedPulling="2025-09-12 17:38:57.552937539 +0000 UTC m=+45.076737300" lastFinishedPulling="2025-09-12 17:39:04.014760606 +0000 UTC m=+51.538560375" observedRunningTime="2025-09-12 17:39:04.20574583 +0000 UTC m=+51.729545613" watchObservedRunningTime="2025-09-12 17:39:04.206853005 +0000 UTC m=+51.730652778" Sep 12 17:39:05.855706 ntpd[1424]: Listen normally on 8 vxlan.calico 192.168.59.64:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 8 vxlan.calico 192.168.59.64:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 9 cali6757fc80113 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 10 vxlan.calico [fe80::6475:96ff:fe82:835a%5]:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 11 calif2a2a19aa13 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 12 calia38192e9c60 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 13 calic67dc804e6c [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 14 calic2c0021bac7 
[fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 15 cali455a91e0b88 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 16 cali923faba856a [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:39:05.857061 ntpd[1424]: 12 Sep 17:39:05 ntpd[1424]: Listen normally on 17 cali7360b0ee64a [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:39:05.855833 ntpd[1424]: Listen normally on 9 cali6757fc80113 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:39:05.855912 ntpd[1424]: Listen normally on 10 vxlan.calico [fe80::6475:96ff:fe82:835a%5]:123 Sep 12 17:39:05.855968 ntpd[1424]: Listen normally on 11 calif2a2a19aa13 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 12 17:39:05.856023 ntpd[1424]: Listen normally on 12 calia38192e9c60 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:39:05.856079 ntpd[1424]: Listen normally on 13 calic67dc804e6c [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:39:05.856132 ntpd[1424]: Listen normally on 14 calic2c0021bac7 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:39:05.856185 ntpd[1424]: Listen normally on 15 cali455a91e0b88 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:39:05.856237 ntpd[1424]: Listen normally on 16 cali923faba856a [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:39:05.856306 ntpd[1424]: Listen normally on 17 cali7360b0ee64a [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:39:06.642875 containerd[1456]: time="2025-09-12T17:39:06.642669477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:06.645026 containerd[1456]: time="2025-09-12T17:39:06.644970039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:39:06.646288 containerd[1456]: time="2025-09-12T17:39:06.646045997Z" level=info msg="ImageCreate event 
name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:06.650638 containerd[1456]: time="2025-09-12T17:39:06.650581117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:06.653536 containerd[1456]: time="2025-09-12T17:39:06.652714593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.636783554s" Sep 12 17:39:06.653536 containerd[1456]: time="2025-09-12T17:39:06.652775680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:39:06.654861 containerd[1456]: time="2025-09-12T17:39:06.654823338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:39:06.659387 containerd[1456]: time="2025-09-12T17:39:06.658369278Z" level=info msg="CreateContainer within sandbox \"ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:39:06.699068 containerd[1456]: time="2025-09-12T17:39:06.698373437Z" level=info msg="CreateContainer within sandbox \"ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1993771c55620158f3a77b8b1a4f1aa6fd69a2ee33a638017411fdb6c63db9fb\"" Sep 12 17:39:06.700598 containerd[1456]: time="2025-09-12T17:39:06.700550183Z" level=info msg="StartContainer for 
\"1993771c55620158f3a77b8b1a4f1aa6fd69a2ee33a638017411fdb6c63db9fb\"" Sep 12 17:39:06.806675 systemd[1]: Started cri-containerd-1993771c55620158f3a77b8b1a4f1aa6fd69a2ee33a638017411fdb6c63db9fb.scope - libcontainer container 1993771c55620158f3a77b8b1a4f1aa6fd69a2ee33a638017411fdb6c63db9fb. Sep 12 17:39:06.933861 containerd[1456]: time="2025-09-12T17:39:06.933797930Z" level=info msg="StartContainer for \"1993771c55620158f3a77b8b1a4f1aa6fd69a2ee33a638017411fdb6c63db9fb\" returns successfully" Sep 12 17:39:10.180481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391025382.mount: Deactivated successfully. Sep 12 17:39:11.662363 containerd[1456]: time="2025-09-12T17:39:11.662259294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:11.664451 containerd[1456]: time="2025-09-12T17:39:11.664380929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:39:11.666213 containerd[1456]: time="2025-09-12T17:39:11.666130171Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:11.669557 containerd[1456]: time="2025-09-12T17:39:11.669484093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:11.670752 containerd[1456]: time="2025-09-12T17:39:11.670589463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", 
size \"66357372\" in 5.015715158s" Sep 12 17:39:11.670752 containerd[1456]: time="2025-09-12T17:39:11.670638760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:39:11.673200 containerd[1456]: time="2025-09-12T17:39:11.672632860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:39:11.674563 containerd[1456]: time="2025-09-12T17:39:11.674404398Z" level=info msg="CreateContainer within sandbox \"7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:39:11.697670 containerd[1456]: time="2025-09-12T17:39:11.697590410Z" level=info msg="CreateContainer within sandbox \"7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929\"" Sep 12 17:39:11.706536 containerd[1456]: time="2025-09-12T17:39:11.703819647Z" level=info msg="StartContainer for \"5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929\"" Sep 12 17:39:11.709334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95159911.mount: Deactivated successfully. Sep 12 17:39:11.774965 systemd[1]: Started cri-containerd-5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929.scope - libcontainer container 5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929. 
Sep 12 17:39:11.868778 containerd[1456]: time="2025-09-12T17:39:11.868687895Z" level=info msg="StartContainer for \"5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929\" returns successfully" Sep 12 17:39:12.241223 kubelet[2605]: I0912 17:39:12.241146 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-q5wmn" podStartSLOduration=26.170796917 podStartE2EDuration="39.24110819s" podCreationTimestamp="2025-09-12 17:38:33 +0000 UTC" firstStartedPulling="2025-09-12 17:38:58.602078924 +0000 UTC m=+46.125878686" lastFinishedPulling="2025-09-12 17:39:11.672390131 +0000 UTC m=+59.196189959" observedRunningTime="2025-09-12 17:39:12.239346745 +0000 UTC m=+59.763146518" watchObservedRunningTime="2025-09-12 17:39:12.24110819 +0000 UTC m=+59.764907958" Sep 12 17:39:12.716254 containerd[1456]: time="2025-09-12T17:39:12.716191262Z" level=info msg="StopPodSandbox for \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\"" Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.764 [WARNING][5189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dab6dd14-c5b2-47a3-9f8b-31765591695a", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096", Pod:"coredns-7c65d6cfc9-425vx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2a2a19aa13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.764 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.764 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" iface="eth0" netns="" Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.764 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.764 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.797 [INFO][5196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" HandleID="k8s-pod-network.b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.797 [INFO][5196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.797 [INFO][5196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.813 [WARNING][5196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" HandleID="k8s-pod-network.b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.813 [INFO][5196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" HandleID="k8s-pod-network.b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.815 [INFO][5196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:12.820574 containerd[1456]: 2025-09-12 17:39:12.818 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:39:12.820574 containerd[1456]: time="2025-09-12T17:39:12.820207791Z" level=info msg="TearDown network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\" successfully" Sep 12 17:39:12.820574 containerd[1456]: time="2025-09-12T17:39:12.820230345Z" level=info msg="StopPodSandbox for \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\" returns successfully" Sep 12 17:39:12.822219 containerd[1456]: time="2025-09-12T17:39:12.820783678Z" level=info msg="RemovePodSandbox for \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\"" Sep 12 17:39:12.822219 containerd[1456]: time="2025-09-12T17:39:12.820818545Z" level=info msg="Forcibly stopping sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\"" Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.874 [WARNING][5210] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dab6dd14-c5b2-47a3-9f8b-31765591695a", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"f6edae25e082d86e9aff2af1c0086c5869db6e793045332ab2f7ca5453ce1096", Pod:"coredns-7c65d6cfc9-425vx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2a2a19aa13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.874 [INFO][5210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.874 [INFO][5210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" iface="eth0" netns="" Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.874 [INFO][5210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.874 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.907 [INFO][5217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" HandleID="k8s-pod-network.b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.907 [INFO][5217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.907 [INFO][5217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.916 [WARNING][5217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" HandleID="k8s-pod-network.b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.916 [INFO][5217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" HandleID="k8s-pod-network.b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--425vx-eth0" Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.918 [INFO][5217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:12.922733 containerd[1456]: 2025-09-12 17:39:12.921 [INFO][5210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663" Sep 12 17:39:12.924384 containerd[1456]: time="2025-09-12T17:39:12.922772647Z" level=info msg="TearDown network for sandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\" successfully" Sep 12 17:39:12.928618 containerd[1456]: time="2025-09-12T17:39:12.928553730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:39:12.928776 containerd[1456]: time="2025-09-12T17:39:12.928656781Z" level=info msg="RemovePodSandbox \"b3390e86c996a42bc1eaf37c1b32e40d5235ff0138c9d6a32fe8d6ef82d93663\" returns successfully" Sep 12 17:39:12.930416 containerd[1456]: time="2025-09-12T17:39:12.929481316Z" level=info msg="StopPodSandbox for \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\"" Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:12.989 [WARNING][5232] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3a3ff327-190a-4ebe-85de-f1209e1870ef", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196", Pod:"csi-node-driver-whntj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic67dc804e6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:12.990 [INFO][5232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:12.990 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" iface="eth0" netns="" Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:12.990 [INFO][5232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:12.990 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:13.036 [INFO][5239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" HandleID="k8s-pod-network.4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:13.036 [INFO][5239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:13.036 [INFO][5239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:13.048 [WARNING][5239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" HandleID="k8s-pod-network.4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:13.048 [INFO][5239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" HandleID="k8s-pod-network.4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:13.051 [INFO][5239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:13.057364 containerd[1456]: 2025-09-12 17:39:13.054 [INFO][5232] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:39:13.058977 containerd[1456]: time="2025-09-12T17:39:13.058661650Z" level=info msg="TearDown network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\" successfully" Sep 12 17:39:13.058977 containerd[1456]: time="2025-09-12T17:39:13.058702781Z" level=info msg="StopPodSandbox for \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\" returns successfully" Sep 12 17:39:13.060301 containerd[1456]: time="2025-09-12T17:39:13.059850441Z" level=info msg="RemovePodSandbox for \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\"" Sep 12 17:39:13.060301 containerd[1456]: time="2025-09-12T17:39:13.059893765Z" level=info msg="Forcibly stopping sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\"" Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.187 [WARNING][5254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3a3ff327-190a-4ebe-85de-f1209e1870ef", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196", Pod:"csi-node-driver-whntj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic67dc804e6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.188 [INFO][5254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 
17:39:13.188 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" iface="eth0" netns="" Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.188 [INFO][5254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.188 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.252 [INFO][5263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" HandleID="k8s-pod-network.4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.252 [INFO][5263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.252 [INFO][5263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.281 [WARNING][5263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" HandleID="k8s-pod-network.4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.281 [INFO][5263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" HandleID="k8s-pod-network.4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-csi--node--driver--whntj-eth0" Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.286 [INFO][5263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:13.297831 containerd[1456]: 2025-09-12 17:39:13.293 [INFO][5254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb" Sep 12 17:39:13.298766 containerd[1456]: time="2025-09-12T17:39:13.297878260Z" level=info msg="TearDown network for sandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\" successfully" Sep 12 17:39:13.308361 containerd[1456]: time="2025-09-12T17:39:13.308212107Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:39:13.308361 containerd[1456]: time="2025-09-12T17:39:13.308308826Z" level=info msg="RemovePodSandbox \"4a5dba2399410d4f7529d3338d456f52cc41139ef6b21cd23b2faaa121f07ccb\" returns successfully" Sep 12 17:39:13.309561 containerd[1456]: time="2025-09-12T17:39:13.309089469Z" level=info msg="StopPodSandbox for \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\"" Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.394 [WARNING][5293] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6d036e19-1f2c-48c7-8f67-1bff02bce89d", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11", Pod:"goldmane-7988f88666-q5wmn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2c0021bac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.394 [INFO][5293] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.394 [INFO][5293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" iface="eth0" netns="" Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.395 [INFO][5293] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.395 [INFO][5293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.462 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" HandleID="k8s-pod-network.6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.462 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.462 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.475 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" HandleID="k8s-pod-network.6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.476 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" HandleID="k8s-pod-network.6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.478 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:13.482962 containerd[1456]: 2025-09-12 17:39:13.480 [INFO][5293] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:39:13.484107 containerd[1456]: time="2025-09-12T17:39:13.484069543Z" level=info msg="TearDown network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\" successfully" Sep 12 17:39:13.484205 containerd[1456]: time="2025-09-12T17:39:13.484188070Z" level=info msg="StopPodSandbox for \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\" returns successfully" Sep 12 17:39:13.485286 containerd[1456]: time="2025-09-12T17:39:13.484847111Z" level=info msg="RemovePodSandbox for \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\"" Sep 12 17:39:13.485286 containerd[1456]: time="2025-09-12T17:39:13.484889602Z" level=info msg="Forcibly stopping sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\"" Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.570 [WARNING][5319] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6d036e19-1f2c-48c7-8f67-1bff02bce89d", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"7c73e0f3216198e503d75be6cafb503466d43959d68444fc530bf4a240110a11", Pod:"goldmane-7988f88666-q5wmn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2c0021bac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.571 [INFO][5319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.571 [INFO][5319] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" iface="eth0" netns="" Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.571 [INFO][5319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.571 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.610 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" HandleID="k8s-pod-network.6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.610 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.610 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.619 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" HandleID="k8s-pod-network.6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.619 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" HandleID="k8s-pod-network.6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-goldmane--7988f88666--q5wmn-eth0" Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.621 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:13.625260 containerd[1456]: 2025-09-12 17:39:13.623 [INFO][5319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a" Sep 12 17:39:13.625260 containerd[1456]: time="2025-09-12T17:39:13.625229358Z" level=info msg="TearDown network for sandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\" successfully" Sep 12 17:39:13.723551 containerd[1456]: time="2025-09-12T17:39:13.723473140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:39:13.724106 containerd[1456]: time="2025-09-12T17:39:13.723589424Z" level=info msg="RemovePodSandbox \"6c3e32192dde0e47b60d5d91140e765e035b4deb19b1983cfb8939c455ebd38a\" returns successfully" Sep 12 17:39:13.729716 containerd[1456]: time="2025-09-12T17:39:13.729287765Z" level=info msg="StopPodSandbox for \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\"" Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.821 [WARNING][5344] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0", GenerateName:"calico-kube-controllers-76b5f5566b-", Namespace:"calico-system", SelfLink:"", UID:"5dc1035e-b888-4db3-8f54-9136957333b5", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76b5f5566b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1", Pod:"calico-kube-controllers-76b5f5566b-zxw5v", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali455a91e0b88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.821 [INFO][5344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.821 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" iface="eth0" netns="" Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.821 [INFO][5344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.821 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.887 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" HandleID="k8s-pod-network.de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.887 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.887 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.901 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" HandleID="k8s-pod-network.de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.901 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" HandleID="k8s-pod-network.de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.904 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:13.914351 containerd[1456]: 2025-09-12 17:39:13.909 [INFO][5344] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:39:13.915774 containerd[1456]: time="2025-09-12T17:39:13.914402189Z" level=info msg="TearDown network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\" successfully" Sep 12 17:39:13.915774 containerd[1456]: time="2025-09-12T17:39:13.914440545Z" level=info msg="StopPodSandbox for \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\" returns successfully" Sep 12 17:39:13.915774 containerd[1456]: time="2025-09-12T17:39:13.915546782Z" level=info msg="RemovePodSandbox for \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\"" Sep 12 17:39:13.915774 containerd[1456]: time="2025-09-12T17:39:13.915588534Z" level=info msg="Forcibly stopping sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\"" Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.002 [WARNING][5365] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0", GenerateName:"calico-kube-controllers-76b5f5566b-", Namespace:"calico-system", SelfLink:"", UID:"5dc1035e-b888-4db3-8f54-9136957333b5", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76b5f5566b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1", Pod:"calico-kube-controllers-76b5f5566b-zxw5v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali455a91e0b88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.002 [INFO][5365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:39:14.080469 
containerd[1456]: 2025-09-12 17:39:14.002 [INFO][5365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" iface="eth0" netns="" Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.003 [INFO][5365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.003 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.053 [INFO][5373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" HandleID="k8s-pod-network.de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.053 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.053 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.069 [WARNING][5373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" HandleID="k8s-pod-network.de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.069 [INFO][5373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" HandleID="k8s-pod-network.de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--kube--controllers--76b5f5566b--zxw5v-eth0" Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.072 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:14.080469 containerd[1456]: 2025-09-12 17:39:14.076 [INFO][5365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614" Sep 12 17:39:14.080469 containerd[1456]: time="2025-09-12T17:39:14.080132459Z" level=info msg="TearDown network for sandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\" successfully" Sep 12 17:39:14.088336 containerd[1456]: time="2025-09-12T17:39:14.088285840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:39:14.088856 containerd[1456]: time="2025-09-12T17:39:14.088623385Z" level=info msg="RemovePodSandbox \"de2a1619859e4016d2fd59b7bf8cfbda3907636cec8e0de9b7ea9bb9e4986614\" returns successfully" Sep 12 17:39:14.089285 containerd[1456]: time="2025-09-12T17:39:14.089251895Z" level=info msg="StopPodSandbox for \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\"" Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.193 [WARNING][5387] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c9918ac5-51bf-44cc-9226-aa00d6fecc77", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0", Pod:"coredns-7c65d6cfc9-4768z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali923faba856a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.193 [INFO][5387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.193 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" iface="eth0" netns="" Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.193 [INFO][5387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.193 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.315 [INFO][5403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" HandleID="k8s-pod-network.4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.316 [INFO][5403] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.316 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.349 [WARNING][5403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" HandleID="k8s-pod-network.4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.350 [INFO][5403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" HandleID="k8s-pod-network.4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.355 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:14.371075 containerd[1456]: 2025-09-12 17:39:14.361 [INFO][5387] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:14.374786 containerd[1456]: time="2025-09-12T17:39:14.371127330Z" level=info msg="TearDown network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\" successfully" Sep 12 17:39:14.374786 containerd[1456]: time="2025-09-12T17:39:14.371162692Z" level=info msg="StopPodSandbox for \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\" returns successfully" Sep 12 17:39:14.374786 containerd[1456]: time="2025-09-12T17:39:14.373284243Z" level=info msg="RemovePodSandbox for \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\"" Sep 12 17:39:14.374786 containerd[1456]: time="2025-09-12T17:39:14.373328252Z" level=info msg="Forcibly stopping sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\"" Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.552 [WARNING][5425] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c9918ac5-51bf-44cc-9226-aa00d6fecc77", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"5f0344c87bb37dc139ac9b680610e804d687d49879c7644f7bf86d76f893f5c0", Pod:"coredns-7c65d6cfc9-4768z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali923faba856a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.553 [INFO][5425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.554 [INFO][5425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" iface="eth0" netns="" Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.554 [INFO][5425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.554 [INFO][5425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.631 [INFO][5436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" HandleID="k8s-pod-network.4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.635 [INFO][5436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.635 [INFO][5436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.667 [WARNING][5436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" HandleID="k8s-pod-network.4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.667 [INFO][5436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" HandleID="k8s-pod-network.4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--4768z-eth0" Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.693 [INFO][5436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:14.705591 containerd[1456]: 2025-09-12 17:39:14.699 [INFO][5425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc" Sep 12 17:39:14.706603 containerd[1456]: time="2025-09-12T17:39:14.705603578Z" level=info msg="TearDown network for sandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\" successfully" Sep 12 17:39:14.715329 containerd[1456]: time="2025-09-12T17:39:14.715271900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:39:14.715546 containerd[1456]: time="2025-09-12T17:39:14.715365939Z" level=info msg="RemovePodSandbox \"4ece88664857a2589647cf3d517c25e36db7b63afbc5aaa1f191f07901eb95bc\" returns successfully" Sep 12 17:39:14.717048 containerd[1456]: time="2025-09-12T17:39:14.717008654Z" level=info msg="StopPodSandbox for \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\"" Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.871 [WARNING][5451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0", GenerateName:"calico-apiserver-77ff955bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"54a13898-563c-4fc8-9cbb-281683965c07", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77ff955bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac", Pod:"calico-apiserver-77ff955bcc-9474z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia38192e9c60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.872 [INFO][5451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.872 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" iface="eth0" netns="" Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.872 [INFO][5451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.872 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.957 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" HandleID="k8s-pod-network.829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.961 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.961 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.981 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" HandleID="k8s-pod-network.829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.982 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" HandleID="k8s-pod-network.829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.986 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:14.996662 containerd[1456]: 2025-09-12 17:39:14.991 [INFO][5451] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:39:14.996662 containerd[1456]: time="2025-09-12T17:39:14.996617650Z" level=info msg="TearDown network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\" successfully" Sep 12 17:39:14.996662 containerd[1456]: time="2025-09-12T17:39:14.996654490Z" level=info msg="StopPodSandbox for \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\" returns successfully" Sep 12 17:39:14.998291 containerd[1456]: time="2025-09-12T17:39:14.997245141Z" level=info msg="RemovePodSandbox for \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\"" Sep 12 17:39:14.998291 containerd[1456]: time="2025-09-12T17:39:14.997283941Z" level=info msg="Forcibly stopping sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\"" Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.110 [WARNING][5474] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0", GenerateName:"calico-apiserver-77ff955bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"54a13898-563c-4fc8-9cbb-281683965c07", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77ff955bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"3e60137736239542a0097165fc65eac8d2297c9fc2cf42c471b208a934845dac", Pod:"calico-apiserver-77ff955bcc-9474z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia38192e9c60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.112 [INFO][5474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 
17:39:15.112 [INFO][5474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" iface="eth0" netns="" Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.112 [INFO][5474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.112 [INFO][5474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.184 [INFO][5482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" HandleID="k8s-pod-network.829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.184 [INFO][5482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.184 [INFO][5482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.202 [WARNING][5482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" HandleID="k8s-pod-network.829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.202 [INFO][5482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" HandleID="k8s-pod-network.829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--9474z-eth0" Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.205 [INFO][5482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:15.222669 containerd[1456]: 2025-09-12 17:39:15.213 [INFO][5474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf" Sep 12 17:39:15.222669 containerd[1456]: time="2025-09-12T17:39:15.221095293Z" level=info msg="TearDown network for sandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\" successfully" Sep 12 17:39:15.233574 containerd[1456]: time="2025-09-12T17:39:15.233406730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:39:15.234199 containerd[1456]: time="2025-09-12T17:39:15.234018186Z" level=info msg="RemovePodSandbox \"829a7653216eccbb06eaeab788efbd0a3deb5d197f2969647adecf9478b784cf\" returns successfully" Sep 12 17:39:15.235844 containerd[1456]: time="2025-09-12T17:39:15.235629615Z" level=info msg="StopPodSandbox for \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\"" Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.369 [WARNING][5496] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0", GenerateName:"calico-apiserver-77ff955bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a81aa3ac-9352-4654-98c8-0683fd08fedb", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77ff955bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0", Pod:"calico-apiserver-77ff955bcc-6mgcv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7360b0ee64a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.373 [INFO][5496] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.373 [INFO][5496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" iface="eth0" netns="" Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.373 [INFO][5496] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.373 [INFO][5496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.472 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" HandleID="k8s-pod-network.d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.473 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.473 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.490 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" HandleID="k8s-pod-network.d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.490 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" HandleID="k8s-pod-network.d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.496 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:15.516874 containerd[1456]: 2025-09-12 17:39:15.504 [INFO][5496] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:15.518923 containerd[1456]: time="2025-09-12T17:39:15.516939241Z" level=info msg="TearDown network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\" successfully" Sep 12 17:39:15.518923 containerd[1456]: time="2025-09-12T17:39:15.516997197Z" level=info msg="StopPodSandbox for \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\" returns successfully" Sep 12 17:39:15.519691 containerd[1456]: time="2025-09-12T17:39:15.519317839Z" level=info msg="RemovePodSandbox for \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\"" Sep 12 17:39:15.519691 containerd[1456]: time="2025-09-12T17:39:15.519472722Z" level=info msg="Forcibly stopping sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\"" Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.685 [WARNING][5518] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0", GenerateName:"calico-apiserver-77ff955bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a81aa3ac-9352-4654-98c8-0683fd08fedb", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77ff955bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-af7634002b537b8a2605.c.flatcar-212911.internal", ContainerID:"94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0", Pod:"calico-apiserver-77ff955bcc-6mgcv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7360b0ee64a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.685 [INFO][5518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 
17:39:15.685 [INFO][5518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" iface="eth0" netns="" Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.685 [INFO][5518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.686 [INFO][5518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.762 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" HandleID="k8s-pod-network.d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.762 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.762 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.782 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" HandleID="k8s-pod-network.d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.782 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" HandleID="k8s-pod-network.d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-calico--apiserver--77ff955bcc--6mgcv-eth0" Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.787 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:15.802782 containerd[1456]: 2025-09-12 17:39:15.793 [INFO][5518] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d" Sep 12 17:39:15.803692 containerd[1456]: time="2025-09-12T17:39:15.802750550Z" level=info msg="TearDown network for sandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\" successfully" Sep 12 17:39:15.819917 containerd[1456]: time="2025-09-12T17:39:15.818527427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:39:15.819917 containerd[1456]: time="2025-09-12T17:39:15.818954515Z" level=info msg="RemovePodSandbox \"d6d97a9910c946a4903e1252ed879d3a8b8dc76ecfc2473e310ea96d4d650a5d\" returns successfully" Sep 12 17:39:15.820209 containerd[1456]: time="2025-09-12T17:39:15.820174802Z" level=info msg="StopPodSandbox for \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\"" Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.003 [WARNING][5539] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.003 [INFO][5539] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.004 [INFO][5539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" iface="eth0" netns="" Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.004 [INFO][5539] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.004 [INFO][5539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.116 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" HandleID="k8s-pod-network.115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.117 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.117 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.142 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" HandleID="k8s-pod-network.115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.143 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" HandleID="k8s-pod-network.115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.147 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:16.159925 containerd[1456]: 2025-09-12 17:39:16.152 [INFO][5539] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:39:16.159925 containerd[1456]: time="2025-09-12T17:39:16.159519798Z" level=info msg="TearDown network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\" successfully" Sep 12 17:39:16.159925 containerd[1456]: time="2025-09-12T17:39:16.159556219Z" level=info msg="StopPodSandbox for \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\" returns successfully" Sep 12 17:39:16.163249 containerd[1456]: time="2025-09-12T17:39:16.162255508Z" level=info msg="RemovePodSandbox for \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\"" Sep 12 17:39:16.163249 containerd[1456]: time="2025-09-12T17:39:16.162303185Z" level=info msg="Forcibly stopping sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\"" Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.320 [WARNING][5561] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, 
moving forward with the clean up ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" WorkloadEndpoint="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.321 [INFO][5561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.321 [INFO][5561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" iface="eth0" netns="" Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.321 [INFO][5561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.321 [INFO][5561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.391 [INFO][5568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" HandleID="k8s-pod-network.115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.392 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.392 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.405 [WARNING][5568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" HandleID="k8s-pod-network.115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.405 [INFO][5568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" HandleID="k8s-pod-network.115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Workload="ci--4081--3--6--af7634002b537b8a2605.c.flatcar--212911.internal-k8s-whisker--5bfdb4c46c--msrzz-eth0" Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.407 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:39:16.419625 containerd[1456]: 2025-09-12 17:39:16.410 [INFO][5561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230" Sep 12 17:39:16.419625 containerd[1456]: time="2025-09-12T17:39:16.418600211Z" level=info msg="TearDown network for sandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\" successfully" Sep 12 17:39:16.432904 containerd[1456]: time="2025-09-12T17:39:16.432594812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:39:16.432904 containerd[1456]: time="2025-09-12T17:39:16.432709753Z" level=info msg="RemovePodSandbox \"115171563d7acdad83d23c73a92406893aad1ffb690aa4aed85c5ec78d670230\" returns successfully" Sep 12 17:39:16.808154 containerd[1456]: time="2025-09-12T17:39:16.806918082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:16.810056 containerd[1456]: time="2025-09-12T17:39:16.809997661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:39:16.812203 containerd[1456]: time="2025-09-12T17:39:16.812162453Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:16.817690 containerd[1456]: time="2025-09-12T17:39:16.817636022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:16.818678 containerd[1456]: time="2025-09-12T17:39:16.818633074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.145959806s" Sep 12 17:39:16.818795 containerd[1456]: time="2025-09-12T17:39:16.818685884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:39:16.824765 containerd[1456]: time="2025-09-12T17:39:16.824724172Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:39:16.842295 containerd[1456]: time="2025-09-12T17:39:16.842235364Z" level=info msg="CreateContainer within sandbox \"3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:39:16.868051 containerd[1456]: time="2025-09-12T17:39:16.867991599Z" level=info msg="CreateContainer within sandbox \"3ec59285dd7098d4bd48a8364c026f8e127c9a041c2e525bee6a9251e1c03cc1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"31ed6019576fac17d4b17c0a2f623c355fcacff56b85689691b0625b1aef7db7\"" Sep 12 17:39:16.871569 containerd[1456]: time="2025-09-12T17:39:16.869881197Z" level=info msg="StartContainer for \"31ed6019576fac17d4b17c0a2f623c355fcacff56b85689691b0625b1aef7db7\"" Sep 12 17:39:16.945797 systemd[1]: Started cri-containerd-31ed6019576fac17d4b17c0a2f623c355fcacff56b85689691b0625b1aef7db7.scope - libcontainer container 31ed6019576fac17d4b17c0a2f623c355fcacff56b85689691b0625b1aef7db7. 
Sep 12 17:39:17.107694 containerd[1456]: time="2025-09-12T17:39:17.107296627Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:17.112534 containerd[1456]: time="2025-09-12T17:39:17.111389780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:39:17.116358 containerd[1456]: time="2025-09-12T17:39:17.116286479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 291.509756ms" Sep 12 17:39:17.116605 containerd[1456]: time="2025-09-12T17:39:17.116361178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:39:17.117699 containerd[1456]: time="2025-09-12T17:39:17.117664540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:39:17.121198 containerd[1456]: time="2025-09-12T17:39:17.121159170Z" level=info msg="CreateContainer within sandbox \"94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:39:17.153193 containerd[1456]: time="2025-09-12T17:39:17.153123603Z" level=info msg="CreateContainer within sandbox \"94a8477c00d30ae8254db162832eee7c86d7aaeca0889976e14856fe44cfbfe0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8412d569c8b55d9f907d8e53544a6bd5329cfd0892a9373afc6ab8f82746c984\"" Sep 12 17:39:17.155807 containerd[1456]: time="2025-09-12T17:39:17.155732567Z" level=info msg="StartContainer 
for \"8412d569c8b55d9f907d8e53544a6bd5329cfd0892a9373afc6ab8f82746c984\"" Sep 12 17:39:17.231030 systemd[1]: Started cri-containerd-8412d569c8b55d9f907d8e53544a6bd5329cfd0892a9373afc6ab8f82746c984.scope - libcontainer container 8412d569c8b55d9f907d8e53544a6bd5329cfd0892a9373afc6ab8f82746c984. Sep 12 17:39:17.409553 containerd[1456]: time="2025-09-12T17:39:17.409130449Z" level=info msg="StartContainer for \"31ed6019576fac17d4b17c0a2f623c355fcacff56b85689691b0625b1aef7db7\" returns successfully" Sep 12 17:39:17.432041 containerd[1456]: time="2025-09-12T17:39:17.431954410Z" level=info msg="StartContainer for \"8412d569c8b55d9f907d8e53544a6bd5329cfd0892a9373afc6ab8f82746c984\" returns successfully" Sep 12 17:39:18.349987 kubelet[2605]: I0912 17:39:18.349588 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77ff955bcc-6mgcv" podStartSLOduration=34.293849637 podStartE2EDuration="49.349562599s" podCreationTimestamp="2025-09-12 17:38:29 +0000 UTC" firstStartedPulling="2025-09-12 17:39:02.061656596 +0000 UTC m=+49.585456355" lastFinishedPulling="2025-09-12 17:39:17.117369555 +0000 UTC m=+64.641169317" observedRunningTime="2025-09-12 17:39:18.344766574 +0000 UTC m=+65.868566360" watchObservedRunningTime="2025-09-12 17:39:18.349562599 +0000 UTC m=+65.873362366" Sep 12 17:39:18.374443 kubelet[2605]: I0912 17:39:18.373830 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76b5f5566b-zxw5v" podStartSLOduration=27.028444556 podStartE2EDuration="44.37380051s" podCreationTimestamp="2025-09-12 17:38:34 +0000 UTC" firstStartedPulling="2025-09-12 17:38:59.476123768 +0000 UTC m=+46.999923527" lastFinishedPulling="2025-09-12 17:39:16.821479718 +0000 UTC m=+64.345279481" observedRunningTime="2025-09-12 17:39:18.371323164 +0000 UTC m=+65.895122962" watchObservedRunningTime="2025-09-12 17:39:18.37380051 +0000 UTC m=+65.897600285" Sep 12 17:39:18.425416 systemd[1]: 
run-containerd-runc-k8s.io-31ed6019576fac17d4b17c0a2f623c355fcacff56b85689691b0625b1aef7db7-runc.JV44f5.mount: Deactivated successfully. Sep 12 17:39:19.039530 containerd[1456]: time="2025-09-12T17:39:19.038337544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:19.042373 containerd[1456]: time="2025-09-12T17:39:19.042311025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:39:19.046538 containerd[1456]: time="2025-09-12T17:39:19.045371890Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:19.049357 containerd[1456]: time="2025-09-12T17:39:19.049314612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:19.051008 containerd[1456]: time="2025-09-12T17:39:19.050964795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.933250288s" Sep 12 17:39:19.051180 containerd[1456]: time="2025-09-12T17:39:19.051154303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:39:19.056797 containerd[1456]: time="2025-09-12T17:39:19.056730207Z" level=info msg="CreateContainer 
within sandbox \"ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:39:19.081118 containerd[1456]: time="2025-09-12T17:39:19.081067520Z" level=info msg="CreateContainer within sandbox \"ecfdf59d44a0f11a733faaa1da16f19c5100e1dd7125abafde68153cdc4c8196\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"13e279da04c611f584bc3b6f657215b92d6200c1c4356d19642436bda21d1dd8\"" Sep 12 17:39:19.082225 containerd[1456]: time="2025-09-12T17:39:19.082175290Z" level=info msg="StartContainer for \"13e279da04c611f584bc3b6f657215b92d6200c1c4356d19642436bda21d1dd8\"" Sep 12 17:39:19.160759 systemd[1]: Started cri-containerd-13e279da04c611f584bc3b6f657215b92d6200c1c4356d19642436bda21d1dd8.scope - libcontainer container 13e279da04c611f584bc3b6f657215b92d6200c1c4356d19642436bda21d1dd8. Sep 12 17:39:19.389714 containerd[1456]: time="2025-09-12T17:39:19.389465042Z" level=info msg="StartContainer for \"13e279da04c611f584bc3b6f657215b92d6200c1c4356d19642436bda21d1dd8\" returns successfully" Sep 12 17:39:19.913813 kubelet[2605]: I0912 17:39:19.913769 2605 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:39:19.913813 kubelet[2605]: I0912 17:39:19.913823 2605 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:39:20.339729 kubelet[2605]: I0912 17:39:20.339311 2605 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:39:20.934517 kubelet[2605]: I0912 17:39:20.932810 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-whntj" podStartSLOduration=26.370148335 podStartE2EDuration="46.932782819s" podCreationTimestamp="2025-09-12 17:38:34 +0000 
UTC" firstStartedPulling="2025-09-12 17:38:58.489985281 +0000 UTC m=+46.013785046" lastFinishedPulling="2025-09-12 17:39:19.052619759 +0000 UTC m=+66.576419530" observedRunningTime="2025-09-12 17:39:20.36008671 +0000 UTC m=+67.883886485" watchObservedRunningTime="2025-09-12 17:39:20.932782819 +0000 UTC m=+68.456582594" Sep 12 17:40:14.203157 systemd[1]: run-containerd-runc-k8s.io-5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929-runc.qG0Cgz.mount: Deactivated successfully. Sep 12 17:40:37.901955 systemd[1]: Started sshd@9-10.128.0.49:22-45.119.86.55:58564.service - OpenSSH per-connection server daemon (45.119.86.55:58564). Sep 12 17:40:38.669978 sshd[5967]: Connection closed by 45.119.86.55 port 58564 Sep 12 17:40:38.671034 systemd[1]: sshd@9-10.128.0.49:22-45.119.86.55:58564.service: Deactivated successfully. Sep 12 17:40:44.204094 systemd[1]: run-containerd-runc-k8s.io-5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929-runc.Cr25rC.mount: Deactivated successfully. Sep 12 17:40:54.555983 systemd[1]: Started sshd@10-10.128.0.49:22-45.119.86.55:45432.service - OpenSSH per-connection server daemon (45.119.86.55:45432). Sep 12 17:41:00.662995 sshd[6032]: Connection closed by authenticating user root 45.119.86.55 port 45432 [preauth] Sep 12 17:41:00.666335 systemd[1]: sshd@10-10.128.0.49:22-45.119.86.55:45432.service: Deactivated successfully. Sep 12 17:41:41.779943 systemd[1]: run-containerd-runc-k8s.io-19130fc3f580e4aadb0034882851c5a3509d6e3cfb76203f053e7b0c5133b499-runc.a8uxVg.mount: Deactivated successfully. Sep 12 17:41:44.205933 systemd[1]: run-containerd-runc-k8s.io-5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929-runc.RlkrjF.mount: Deactivated successfully. Sep 12 17:41:49.089968 systemd[1]: Started sshd@11-10.128.0.49:22-185.156.73.233:41558.service - OpenSSH per-connection server daemon (185.156.73.233:41558). 
Sep 12 17:41:50.473283 sshd[6214]: Connection closed by authenticating user root 185.156.73.233 port 41558 [preauth] Sep 12 17:41:50.476655 systemd[1]: sshd@11-10.128.0.49:22-185.156.73.233:41558.service: Deactivated successfully. Sep 12 17:42:01.923797 systemd[1]: run-containerd-runc-k8s.io-31ed6019576fac17d4b17c0a2f623c355fcacff56b85689691b0625b1aef7db7-runc.ZtZ7mg.mount: Deactivated successfully. Sep 12 17:42:14.211251 systemd[1]: run-containerd-runc-k8s.io-5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929-runc.ZmpkfY.mount: Deactivated successfully. Sep 12 17:42:57.855884 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:42:57.858938 ntpd[1424]: 12 Sep 17:42:57 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:43:14.137150 systemd[1]: run-containerd-runc-k8s.io-31ed6019576fac17d4b17c0a2f623c355fcacff56b85689691b0625b1aef7db7-runc.Ma2bZ0.mount: Deactivated successfully. Sep 12 17:43:44.202903 systemd[1]: run-containerd-runc-k8s.io-5af62a12bfd3cca63f01ebb4f6d46f0457622b1c4c279aef695bb429aea72929-runc.viK48o.mount: Deactivated successfully. Sep 12 17:44:22.002102 systemd[1]: Started sshd@12-10.128.0.49:22-139.178.89.65:45490.service - OpenSSH per-connection server daemon (139.178.89.65:45490). Sep 12 17:44:22.393010 sshd[6704]: Accepted publickey for core from 139.178.89.65 port 45490 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:22.395389 sshd[6704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:22.402165 systemd-logind[1436]: New session 10 of user core. Sep 12 17:44:22.407900 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:44:22.771048 sshd[6704]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:22.777956 systemd[1]: sshd@12-10.128.0.49:22-139.178.89.65:45490.service: Deactivated successfully. Sep 12 17:44:22.781937 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 12 17:44:22.783317 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:44:22.785007 systemd-logind[1436]: Removed session 10. Sep 12 17:44:27.847032 systemd[1]: Started sshd@13-10.128.0.49:22-139.178.89.65:45498.service - OpenSSH per-connection server daemon (139.178.89.65:45498). Sep 12 17:44:28.259115 sshd[6742]: Accepted publickey for core from 139.178.89.65 port 45498 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:28.260220 sshd[6742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:28.267773 systemd-logind[1436]: New session 11 of user core. Sep 12 17:44:28.277223 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:44:28.662336 sshd[6742]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:28.676559 systemd[1]: sshd@13-10.128.0.49:22-139.178.89.65:45498.service: Deactivated successfully. Sep 12 17:44:28.682497 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:44:28.684766 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:44:28.686948 systemd-logind[1436]: Removed session 11. Sep 12 17:44:33.732936 systemd[1]: Started sshd@14-10.128.0.49:22-139.178.89.65:52054.service - OpenSSH per-connection server daemon (139.178.89.65:52054). Sep 12 17:44:34.116634 sshd[6757]: Accepted publickey for core from 139.178.89.65 port 52054 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:34.118951 sshd[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:34.126611 systemd-logind[1436]: New session 12 of user core. Sep 12 17:44:34.133837 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:44:34.469195 sshd[6757]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:34.474185 systemd[1]: sshd@14-10.128.0.49:22-139.178.89.65:52054.service: Deactivated successfully. 
Sep 12 17:44:34.477128 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:44:34.479396 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:44:34.481669 systemd-logind[1436]: Removed session 12. Sep 12 17:44:34.541021 systemd[1]: Started sshd@15-10.128.0.49:22-139.178.89.65:52070.service - OpenSSH per-connection server daemon (139.178.89.65:52070). Sep 12 17:44:34.916113 sshd[6770]: Accepted publickey for core from 139.178.89.65 port 52070 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:34.916997 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:34.923614 systemd-logind[1436]: New session 13 of user core. Sep 12 17:44:34.932766 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:44:35.301097 sshd[6770]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:35.306769 systemd[1]: sshd@15-10.128.0.49:22-139.178.89.65:52070.service: Deactivated successfully. Sep 12 17:44:35.310525 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:44:35.311945 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:44:35.314064 systemd-logind[1436]: Removed session 13. Sep 12 17:44:35.372981 systemd[1]: Started sshd@16-10.128.0.49:22-139.178.89.65:52072.service - OpenSSH per-connection server daemon (139.178.89.65:52072). Sep 12 17:44:35.762204 sshd[6781]: Accepted publickey for core from 139.178.89.65 port 52072 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:35.764556 sshd[6781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:35.772883 systemd-logind[1436]: New session 14 of user core. Sep 12 17:44:35.778886 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 17:44:36.120802 sshd[6781]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:36.127922 systemd[1]: sshd@16-10.128.0.49:22-139.178.89.65:52072.service: Deactivated successfully. Sep 12 17:44:36.131059 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:44:36.132304 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:44:36.133930 systemd-logind[1436]: Removed session 14. Sep 12 17:44:41.193534 systemd[1]: Started sshd@17-10.128.0.49:22-139.178.89.65:43994.service - OpenSSH per-connection server daemon (139.178.89.65:43994). Sep 12 17:44:41.572718 sshd[6800]: Accepted publickey for core from 139.178.89.65 port 43994 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:41.575217 sshd[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:41.582066 systemd-logind[1436]: New session 15 of user core. Sep 12 17:44:41.591831 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:44:41.970763 sshd[6800]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:41.976590 systemd[1]: sshd@17-10.128.0.49:22-139.178.89.65:43994.service: Deactivated successfully. Sep 12 17:44:41.980056 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:44:41.983028 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:44:41.984960 systemd-logind[1436]: Removed session 15. Sep 12 17:44:47.043942 systemd[1]: Started sshd@18-10.128.0.49:22-139.178.89.65:44004.service - OpenSSH per-connection server daemon (139.178.89.65:44004). Sep 12 17:44:47.409706 sshd[6874]: Accepted publickey for core from 139.178.89.65 port 44004 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:47.413047 sshd[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:47.426094 systemd-logind[1436]: New session 16 of user core. 
Sep 12 17:44:47.431766 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:44:47.785435 sshd[6874]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:47.793242 systemd[1]: sshd@18-10.128.0.49:22-139.178.89.65:44004.service: Deactivated successfully. Sep 12 17:44:47.796669 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:44:47.799189 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:44:47.800881 systemd-logind[1436]: Removed session 16. Sep 12 17:44:52.865772 systemd[1]: Started sshd@19-10.128.0.49:22-139.178.89.65:59638.service - OpenSSH per-connection server daemon (139.178.89.65:59638). Sep 12 17:44:53.258109 sshd[6888]: Accepted publickey for core from 139.178.89.65 port 59638 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:53.261390 sshd[6888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:53.269962 systemd-logind[1436]: New session 17 of user core. Sep 12 17:44:53.274761 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:44:53.612234 sshd[6888]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:53.617597 systemd[1]: sshd@19-10.128.0.49:22-139.178.89.65:59638.service: Deactivated successfully. Sep 12 17:44:53.621412 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:44:53.625108 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:44:53.627204 systemd-logind[1436]: Removed session 17. Sep 12 17:44:53.685444 systemd[1]: Started sshd@20-10.128.0.49:22-139.178.89.65:59646.service - OpenSSH per-connection server daemon (139.178.89.65:59646). 
Sep 12 17:44:54.072215 sshd[6901]: Accepted publickey for core from 139.178.89.65 port 59646 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:54.073120 sshd[6901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:54.079600 systemd-logind[1436]: New session 18 of user core. Sep 12 17:44:54.085803 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:44:54.504044 sshd[6901]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:54.509985 systemd[1]: sshd@20-10.128.0.49:22-139.178.89.65:59646.service: Deactivated successfully. Sep 12 17:44:54.514372 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:44:54.516059 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:44:54.517801 systemd-logind[1436]: Removed session 18. Sep 12 17:44:54.578197 systemd[1]: Started sshd@21-10.128.0.49:22-139.178.89.65:59656.service - OpenSSH per-connection server daemon (139.178.89.65:59656). Sep 12 17:44:54.965678 sshd[6912]: Accepted publickey for core from 139.178.89.65 port 59656 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:54.967648 sshd[6912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:54.973277 systemd-logind[1436]: New session 19 of user core. Sep 12 17:44:54.979722 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:44:57.584955 sshd[6912]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:57.594367 systemd[1]: sshd@21-10.128.0.49:22-139.178.89.65:59656.service: Deactivated successfully. Sep 12 17:44:57.598395 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:44:57.607098 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:44:57.609187 systemd-logind[1436]: Removed session 19. 
Sep 12 17:44:57.665080 systemd[1]: Started sshd@22-10.128.0.49:22-139.178.89.65:59660.service - OpenSSH per-connection server daemon (139.178.89.65:59660). Sep 12 17:44:58.068124 sshd[6927]: Accepted publickey for core from 139.178.89.65 port 59660 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:58.070685 sshd[6927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:58.078692 systemd-logind[1436]: New session 20 of user core. Sep 12 17:44:58.085733 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:44:58.777196 sshd[6927]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:58.786327 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:44:58.787120 systemd[1]: sshd@22-10.128.0.49:22-139.178.89.65:59660.service: Deactivated successfully. Sep 12 17:44:58.792750 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:44:58.800965 systemd-logind[1436]: Removed session 20. Sep 12 17:44:58.852961 systemd[1]: Started sshd@23-10.128.0.49:22-139.178.89.65:59664.service - OpenSSH per-connection server daemon (139.178.89.65:59664). Sep 12 17:44:59.262627 sshd[6941]: Accepted publickey for core from 139.178.89.65 port 59664 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:44:59.264276 sshd[6941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:59.278057 systemd-logind[1436]: New session 21 of user core. Sep 12 17:44:59.284749 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:44:59.660037 sshd[6941]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:59.670043 systemd[1]: sshd@23-10.128.0.49:22-139.178.89.65:59664.service: Deactivated successfully. Sep 12 17:44:59.676139 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:44:59.677673 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit. 
Sep 12 17:44:59.680643 systemd-logind[1436]: Removed session 21. Sep 12 17:45:04.731167 systemd[1]: Started sshd@24-10.128.0.49:22-139.178.89.65:48760.service - OpenSSH per-connection server daemon (139.178.89.65:48760). Sep 12 17:45:05.118669 sshd[6976]: Accepted publickey for core from 139.178.89.65 port 48760 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:45:05.120546 sshd[6976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:05.127115 systemd-logind[1436]: New session 22 of user core. Sep 12 17:45:05.133771 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:45:05.474212 sshd[6976]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:05.480466 systemd[1]: sshd@24-10.128.0.49:22-139.178.89.65:48760.service: Deactivated successfully. Sep 12 17:45:05.483389 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:45:05.484456 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:45:05.485975 systemd-logind[1436]: Removed session 22. Sep 12 17:45:10.549982 systemd[1]: Started sshd@25-10.128.0.49:22-139.178.89.65:35492.service - OpenSSH per-connection server daemon (139.178.89.65:35492). Sep 12 17:45:10.952311 sshd[7009]: Accepted publickey for core from 139.178.89.65 port 35492 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:45:10.954234 sshd[7009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:10.963982 systemd-logind[1436]: New session 23 of user core. Sep 12 17:45:10.968793 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:45:11.395926 sshd[7009]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:11.408661 systemd[1]: sshd@25-10.128.0.49:22-139.178.89.65:35492.service: Deactivated successfully. Sep 12 17:45:11.413816 systemd[1]: session-23.scope: Deactivated successfully. 
Sep 12 17:45:11.415766 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:45:11.419393 systemd-logind[1436]: Removed session 23. Sep 12 17:45:14.144922 systemd[1]: run-containerd-runc-k8s.io-31ed6019576fac17d4b17c0a2f623c355fcacff56b85689691b0625b1aef7db7-runc.ejJBvB.mount: Deactivated successfully. Sep 12 17:45:16.465057 systemd[1]: Started sshd@26-10.128.0.49:22-139.178.89.65:35498.service - OpenSSH per-connection server daemon (139.178.89.65:35498). Sep 12 17:45:16.851427 sshd[7083]: Accepted publickey for core from 139.178.89.65 port 35498 ssh2: RSA SHA256:gjw0oebFhoUUtskhzNVjQlm5ZON88aMhVi+M2WRURB8 Sep 12 17:45:16.853703 sshd[7083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:45:16.860457 systemd-logind[1436]: New session 24 of user core. Sep 12 17:45:16.865770 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:45:17.209456 sshd[7083]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:17.215366 systemd[1]: sshd@26-10.128.0.49:22-139.178.89.65:35498.service: Deactivated successfully. Sep 12 17:45:17.218229 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:45:17.219890 systemd-logind[1436]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:45:17.222034 systemd-logind[1436]: Removed session 24.