Aug 13 07:17:04.132085 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:17:04.132154 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:04.132173 kernel: BIOS-provided physical RAM map:
Aug 13 07:17:04.132188 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Aug 13 07:17:04.132202 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Aug 13 07:17:04.132216 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Aug 13 07:17:04.132234 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Aug 13 07:17:04.132253 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Aug 13 07:17:04.132269 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Aug 13 07:17:04.132284 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Aug 13 07:17:04.132299 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Aug 13 07:17:04.132315 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Aug 13 07:17:04.132330 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Aug 13 07:17:04.132346 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Aug 13 07:17:04.132369 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Aug 13 07:17:04.132386 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Aug 13 07:17:04.132402 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Aug 13 07:17:04.132419 kernel: NX (Execute Disable) protection: active
Aug 13 07:17:04.132436 kernel: APIC: Static calls initialized
Aug 13 07:17:04.132452 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:17:04.132469 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Aug 13 07:17:04.132486 kernel: SMBIOS 2.4 present.
Aug 13 07:17:04.132503 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Aug 13 07:17:04.132520 kernel: Hypervisor detected: KVM
Aug 13 07:17:04.132540 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:17:04.132557 kernel: kvm-clock: using sched offset of 13038050282 cycles
Aug 13 07:17:04.132575 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:17:04.132592 kernel: tsc: Detected 2299.998 MHz processor
Aug 13 07:17:04.132609 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:17:04.132627 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:17:04.132645 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Aug 13 07:17:04.132662 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Aug 13 07:17:04.132679 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:17:04.132700 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Aug 13 07:17:04.132717 kernel: Using GB pages for direct mapping
Aug 13 07:17:04.132734 kernel: Secure boot disabled
Aug 13 07:17:04.132751 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:17:04.132769 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Aug 13 07:17:04.132786 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Aug 13 07:17:04.132804 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Aug 13 07:17:04.132828 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Aug 13 07:17:04.132849 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Aug 13 07:17:04.132867 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212)
Aug 13 07:17:04.132885 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Aug 13 07:17:04.132903 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Aug 13 07:17:04.132922 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Aug 13 07:17:04.132940 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Aug 13 07:17:04.132962 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Aug 13 07:17:04.132980 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Aug 13 07:17:04.132998 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Aug 13 07:17:04.133016 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Aug 13 07:17:04.133034 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Aug 13 07:17:04.133076 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Aug 13 07:17:04.133095 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Aug 13 07:17:04.133113 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Aug 13 07:17:04.133140 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Aug 13 07:17:04.133163 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Aug 13 07:17:04.133181 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:17:04.133199 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:17:04.133216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:17:04.133235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Aug 13 07:17:04.133253 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Aug 13 07:17:04.133271 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Aug 13 07:17:04.133289 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Aug 13 07:17:04.133307 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Aug 13 07:17:04.133329 kernel: Zone ranges:
Aug 13 07:17:04.133347 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:17:04.133365 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 07:17:04.133383 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 07:17:04.133401 kernel: Movable zone start for each node
Aug 13 07:17:04.133419 kernel: Early memory node ranges
Aug 13 07:17:04.133437 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Aug 13 07:17:04.133455 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Aug 13 07:17:04.133473 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Aug 13 07:17:04.133495 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Aug 13 07:17:04.133513 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 07:17:04.133531 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Aug 13 07:17:04.133549 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:17:04.133567 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Aug 13 07:17:04.133585 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Aug 13 07:17:04.133603 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Aug 13 07:17:04.133622 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Aug 13 07:17:04.133656 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 13 07:17:04.133679 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:17:04.133697 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:17:04.133714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:17:04.133732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:17:04.133750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:17:04.133768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:17:04.133786 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:17:04.133804 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:17:04.133823 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:17:04.133850 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:17:04.133869 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:17:04.133887 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:17:04.133905 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:17:04.133923 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:17:04.133940 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:17:04.133958 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:17:04.133976 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:17:04.133996 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:04.134019 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:17:04.134037 kernel: random: crng init done
Aug 13 07:17:04.134129 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 07:17:04.134147 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:17:04.134165 kernel: Fallback order for Node 0: 0
Aug 13 07:17:04.134183 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Aug 13 07:17:04.134201 kernel: Policy zone: Normal
Aug 13 07:17:04.134219 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:17:04.134242 kernel: software IO TLB: area num 2.
Aug 13 07:17:04.134261 kernel: Memory: 7513392K/7860584K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 346932K reserved, 0K cma-reserved)
Aug 13 07:17:04.134279 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:17:04.134297 kernel: Kernel/User page tables isolation: enabled
Aug 13 07:17:04.134315 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:17:04.134333 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:17:04.134351 kernel: Dynamic Preempt: voluntary
Aug 13 07:17:04.134369 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:17:04.134394 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:17:04.134431 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:17:04.134450 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:17:04.134469 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:17:04.134492 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:17:04.134511 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:17:04.134530 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:17:04.134549 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 07:17:04.134568 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:17:04.134588 kernel: Console: colour dummy device 80x25
Aug 13 07:17:04.134611 kernel: printk: console [ttyS0] enabled
Aug 13 07:17:04.134629 kernel: ACPI: Core revision 20230628
Aug 13 07:17:04.134648 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:17:04.134667 kernel: x2apic enabled
Aug 13 07:17:04.134687 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:17:04.134707 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Aug 13 07:17:04.134727 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:17:04.134746 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Aug 13 07:17:04.134768 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Aug 13 07:17:04.134788 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Aug 13 07:17:04.134807 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:17:04.134826 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Aug 13 07:17:04.134845 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Aug 13 07:17:04.134864 kernel: Spectre V2 : Mitigation: IBRS
Aug 13 07:17:04.134883 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:17:04.134902 kernel: RETBleed: Mitigation: IBRS
Aug 13 07:17:04.134921 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:17:04.134945 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Aug 13 07:17:04.134964 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:17:04.134983 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 07:17:04.135003 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:17:04.135022 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:17:04.135054 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:17:04.135084 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:17:04.135103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:17:04.135129 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:17:04.135154 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 07:17:04.135173 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:17:04.135192 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:17:04.135211 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:17:04.135230 kernel: landlock: Up and running.
Aug 13 07:17:04.135249 kernel: SELinux: Initializing.
Aug 13 07:17:04.135268 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:17:04.135288 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:17:04.135307 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Aug 13 07:17:04.135331 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:17:04.135350 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:17:04.135369 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:17:04.135387 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Aug 13 07:17:04.135688 kernel: signal: max sigframe size: 1776
Aug 13 07:17:04.135707 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:17:04.135726 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:17:04.135743 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:17:04.135761 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:17:04.135785 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:17:04.135802 kernel: .... node #0, CPUs: #1
Aug 13 07:17:04.135820 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 13 07:17:04.135838 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 07:17:04.135856 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:17:04.136001 kernel: smpboot: Max logical packages: 1
Aug 13 07:17:04.136020 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Aug 13 07:17:04.136038 kernel: devtmpfs: initialized
Aug 13 07:17:04.136093 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:17:04.136112 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Aug 13 07:17:04.136139 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:17:04.136278 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:17:04.136297 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:17:04.136315 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:17:04.136335 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:17:04.136353 kernel: audit: type=2000 audit(1755069422.737:1): state=initialized audit_enabled=0 res=1
Aug 13 07:17:04.136486 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:17:04.136509 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:17:04.136528 kernel: cpuidle: using governor menu
Aug 13 07:17:04.136546 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:17:04.136565 kernel: dca service started, version 1.12.1
Aug 13 07:17:04.136584 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:17:04.136716 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:17:04.136736 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:17:04.136756 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:17:04.136774 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:17:04.136796 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:17:04.136813 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:17:04.136829 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:17:04.136847 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:17:04.136865 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 13 07:17:04.136882 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:17:04.136899 kernel: ACPI: Interpreter enabled
Aug 13 07:17:04.136917 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:17:04.136935 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:17:04.136958 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:17:04.136976 kernel: PCI: Ignoring E820 reservations for host bridge windows
Aug 13 07:17:04.136995 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Aug 13 07:17:04.137149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:17:04.137521 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:17:04.137728 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 07:17:04.137914 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 07:17:04.137945 kernel: PCI host bridge to bus 0000:00
Aug 13 07:17:04.140807 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:17:04.141026 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:17:04.141237 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:17:04.141401 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Aug 13 07:17:04.141564 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:17:04.141775 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:17:04.141983 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Aug 13 07:17:04.142199 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 07:17:04.142386 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 07:17:04.142579 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Aug 13 07:17:04.143230 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 07:17:04.143594 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Aug 13 07:17:04.143952 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:17:04.144629 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 07:17:04.144833 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Aug 13 07:17:04.145036 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:17:04.145250 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Aug 13 07:17:04.145434 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Aug 13 07:17:04.145458 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:17:04.145484 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:17:04.145503 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:17:04.145522 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:17:04.145541 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:17:04.145559 kernel: iommu: Default domain type: Translated
Aug 13 07:17:04.145578 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:17:04.145597 kernel: efivars: Registered efivars operations
Aug 13 07:17:04.145615 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:17:04.145634 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:17:04.145657 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Aug 13 07:17:04.145676 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Aug 13 07:17:04.145693 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Aug 13 07:17:04.145711 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Aug 13 07:17:04.145728 kernel: vgaarb: loaded
Aug 13 07:17:04.145746 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:17:04.145764 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:17:04.145781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:17:04.145801 kernel: pnp: PnP ACPI init
Aug 13 07:17:04.145822 kernel: pnp: PnP ACPI: found 7 devices
Aug 13 07:17:04.145854 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:17:04.145879 kernel: NET: Registered PF_INET protocol family
Aug 13 07:17:04.145918 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:17:04.145936 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 07:17:04.145952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:17:04.145970 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:17:04.145990 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 13 07:17:04.146007 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 07:17:04.146029 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:17:04.147002 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:17:04.147029 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:17:04.148678 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:17:04.149223 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:17:04.149637 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:17:04.149818 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:17:04.149985 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Aug 13 07:17:04.150252 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:17:04.150285 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:17:04.150307 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 07:17:04.150327 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Aug 13 07:17:04.150346 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:17:04.150364 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:17:04.150383 kernel: clocksource: Switched to clocksource tsc
Aug 13 07:17:04.150401 kernel: Initialise system trusted keyrings
Aug 13 07:17:04.150428 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 07:17:04.150449 kernel: Key type asymmetric registered
Aug 13 07:17:04.150468 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:17:04.150487 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:17:04.150505 kernel: io scheduler mq-deadline registered
Aug 13 07:17:04.150524 kernel: io scheduler kyber registered
Aug 13 07:17:04.150543 kernel: io scheduler bfq registered
Aug 13 07:17:04.150562 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:17:04.150584 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 07:17:04.150804 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Aug 13 07:17:04.150830 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 13 07:17:04.151036 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Aug 13 07:17:04.152185 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 07:17:04.152428 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Aug 13 07:17:04.152455 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:17:04.152476 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:17:04.152496 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 07:17:04.152514 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Aug 13 07:17:04.152541 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Aug 13 07:17:04.152756 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Aug 13 07:17:04.152784 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:17:04.152803 kernel: i8042: Warning: Keylock active
Aug 13 07:17:04.152822 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:17:04.152841 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:17:04.154139 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 07:17:04.154370 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 07:17:04.154549 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T07:17:03 UTC (1755069423)
Aug 13 07:17:04.154725 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 07:17:04.154750 kernel: intel_pstate: CPU model not supported
Aug 13 07:17:04.154771 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:17:04.154791 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:17:04.154811 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:17:04.154831 kernel: Segment Routing with IPv6
Aug 13 07:17:04.154854 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:17:04.154873 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:17:04.154892 kernel: Key type dns_resolver registered
Aug 13 07:17:04.154937 kernel: IPI shorthand broadcast: enabled
Aug 13 07:17:04.154957 kernel: sched_clock: Marking stable (895005807, 171408081)->(1132864532, -66450644)
Aug 13 07:17:04.154977 kernel: registered taskstats version 1
Aug 13 07:17:04.154996 kernel: Loading compiled-in X.509 certificates
Aug 13 07:17:04.155016 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:17:04.155035 kernel: Key type .fscrypt registered
Aug 13 07:17:04.156113 kernel: Key type fscrypt-provisioning registered
Aug 13 07:17:04.156174 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:17:04.156196 kernel: ima: No architecture policies found
Aug 13 07:17:04.156216 kernel: clk: Disabling unused clocks
Aug 13 07:17:04.156234 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:17:04.156252 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:17:04.156271 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:17:04.156289 kernel: Run /init as init process
Aug 13 07:17:04.156306 kernel: with arguments:
Aug 13 07:17:04.156329 kernel: /init
Aug 13 07:17:04.156346 kernel: with environment:
Aug 13 07:17:04.156363 kernel: HOME=/
Aug 13 07:17:04.156380 kernel: TERM=linux
Aug 13 07:17:04.156399 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:17:04.156416 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:17:04.156439 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:17:04.156461 systemd[1]: Detected virtualization google.
Aug 13 07:17:04.156485 systemd[1]: Detected architecture x86-64.
Aug 13 07:17:04.156503 systemd[1]: Running in initrd.
Aug 13 07:17:04.156523 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:17:04.156541 systemd[1]: Hostname set to .
Aug 13 07:17:04.156560 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:17:04.156578 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:17:04.156596 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:17:04.156616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:17:04.156644 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:17:04.156665 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:17:04.156686 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:17:04.156706 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:17:04.156728 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:17:04.156747 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:17:04.156770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:17:04.156789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:17:04.156807 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:17:04.156846 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:17:04.156869 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:17:04.156887 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:17:04.156907 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:17:04.156930 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:17:04.156949 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:17:04.156969 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:17:04.156988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:17:04.157008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:17:04.157027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:17:04.157076 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:17:04.157096 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:17:04.157127 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:17:04.157147 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:17:04.157167 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:17:04.157186 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:17:04.157206 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:17:04.157226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:17:04.157288 systemd-journald[183]: Collecting audit messages is disabled.
Aug 13 07:17:04.157335 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:17:04.157356 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:17:04.157376 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:17:04.157402 systemd-journald[183]: Journal started
Aug 13 07:17:04.157442 systemd-journald[183]: Runtime Journal (/run/log/journal/1416bb73631a45518f815b2b2a6b8c46) is 8.0M, max 148.7M, 140.7M free.
Aug 13 07:17:04.161551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:17:04.143295 systemd-modules-load[184]: Inserted module 'overlay' Aug 13 07:17:04.170080 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:17:04.190409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:17:04.199647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:04.218704 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:17:04.218750 kernel: Bridge firewalling registered Aug 13 07:17:04.205101 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 07:17:04.206646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:17:04.212063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:17:04.226391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:04.237438 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:17:04.246306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:17:04.247543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:17:04.259501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:17:04.269170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:17:04.283588 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:17:04.293668 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 07:17:04.310389 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:17:04.332365 systemd-resolved[209]: Positive Trust Anchors: Aug 13 07:17:04.332954 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:17:04.333026 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:17:04.340023 systemd-resolved[209]: Defaulting to hostname 'linux'. Aug 13 07:17:04.359272 dracut-cmdline[217]: dracut-dracut-053 Aug 13 07:17:04.359272 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:17:04.341831 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:17:04.367339 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:17:04.453092 kernel: SCSI subsystem initialized Aug 13 07:17:04.465100 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 07:17:04.478091 kernel: iscsi: registered transport (tcp) Aug 13 07:17:04.504262 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:17:04.504352 kernel: QLogic iSCSI HBA Driver Aug 13 07:17:04.561090 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:17:04.566488 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:17:04.611850 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:17:04.611948 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:17:04.611980 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:17:04.663105 kernel: raid6: avx2x4 gen() 16787 MB/s Aug 13 07:17:04.681139 kernel: raid6: avx2x2 gen() 16587 MB/s Aug 13 07:17:04.699132 kernel: raid6: avx2x1 gen() 13252 MB/s Aug 13 07:17:04.699223 kernel: raid6: using algorithm avx2x4 gen() 16787 MB/s Aug 13 07:17:04.716766 kernel: raid6: .... xor() 7354 MB/s, rmw enabled Aug 13 07:17:04.716875 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:17:04.741097 kernel: xor: automatically using best checksumming function avx Aug 13 07:17:04.919094 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:17:04.932755 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:17:04.942367 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:17:04.960349 systemd-udevd[399]: Using default interface naming scheme 'v255'. Aug 13 07:17:04.968597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:17:04.995348 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:17:05.046346 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Aug 13 07:17:05.086200 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 13 07:17:05.090463 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:17:05.221454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:17:05.238425 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:17:05.300852 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:17:05.322633 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:17:05.343338 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:17:05.355202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:17:05.410234 kernel: scsi host0: Virtio SCSI HBA Aug 13 07:17:05.370395 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:17:05.409393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:17:05.502216 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Aug 13 07:17:05.502314 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:17:05.502344 kernel: AES CTR mode by8 optimization enabled Aug 13 07:17:05.485222 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:17:05.485437 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:17:05.635301 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Aug 13 07:17:05.635697 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Aug 13 07:17:05.635956 kernel: sd 0:0:1:0: [sda] Write Protect is off Aug 13 07:17:05.636228 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Aug 13 07:17:05.636457 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 07:17:05.636693 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Aug 13 07:17:05.636719 kernel: GPT:17805311 != 25165823 Aug 13 07:17:05.636750 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:17:05.636773 kernel: GPT:17805311 != 25165823 Aug 13 07:17:05.636796 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:17:05.636819 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:05.636844 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Aug 13 07:17:05.524388 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:05.535173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:17:05.535446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:05.547238 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:05.631981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:05.713076 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (450) Aug 13 07:17:05.720300 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Aug 13 07:17:05.743090 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (458) Aug 13 07:17:05.753737 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:17:05.764785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:05.814273 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Aug 13 07:17:05.822489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 13 07:17:05.840795 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Aug 13 07:17:05.857560 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. 
Aug 13 07:17:05.886667 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:17:05.933085 disk-uuid[539]: Primary Header is updated. Aug 13 07:17:05.933085 disk-uuid[539]: Secondary Entries is updated. Aug 13 07:17:05.933085 disk-uuid[539]: Secondary Header is updated. Aug 13 07:17:05.970287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:05.935412 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:05.994200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:06.017106 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:06.022121 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:17:07.017239 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:07.017350 disk-uuid[541]: The operation has completed successfully. Aug 13 07:17:07.115397 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:17:07.115582 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:17:07.134410 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:17:07.163746 sh[566]: Success Aug 13 07:17:07.190199 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:17:07.313379 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:17:07.321243 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:17:07.365368 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 13 07:17:07.398130 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:17:07.398263 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:17:07.415929 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:17:07.416074 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:17:07.429107 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:17:07.467183 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 07:17:07.476650 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:17:07.486450 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:17:07.491403 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:17:07.528320 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:17:07.577408 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:07.577520 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:17:07.577547 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:17:07.596312 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 07:17:07.596436 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:17:07.624334 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:07.623657 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:17:07.634256 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:17:07.659472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Aug 13 07:17:07.707065 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:17:07.738448 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:17:07.829086 systemd-networkd[749]: lo: Link UP Aug 13 07:17:07.829096 systemd-networkd[749]: lo: Gained carrier Aug 13 07:17:07.836514 systemd-networkd[749]: Enumeration completed Aug 13 07:17:07.836845 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:17:07.837747 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:17:07.837757 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:17:07.839941 systemd-networkd[749]: eth0: Link UP Aug 13 07:17:07.913179 ignition[704]: Ignition 2.19.0 Aug 13 07:17:07.839951 systemd-networkd[749]: eth0: Gained carrier Aug 13 07:17:07.913191 ignition[704]: Stage: fetch-offline Aug 13 07:17:07.839970 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:17:07.913273 ignition[704]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:07.858477 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.37/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 13 07:17:07.913290 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:07.903466 systemd[1]: Reached target network.target - Network. Aug 13 07:17:07.913575 ignition[704]: parsed url from cmdline: "" Aug 13 07:17:07.915784 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:17:07.913582 ignition[704]: no config URL provided Aug 13 07:17:07.945369 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 07:17:07.913593 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:17:07.979805 unknown[759]: fetched base config from "system" Aug 13 07:17:07.913607 ignition[704]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:17:07.979819 unknown[759]: fetched base config from "system" Aug 13 07:17:07.913618 ignition[704]: failed to fetch config: resource requires networking Aug 13 07:17:07.979830 unknown[759]: fetched user config from "gcp" Aug 13 07:17:07.913927 ignition[704]: Ignition finished successfully Aug 13 07:17:07.982381 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:17:07.966038 ignition[759]: Ignition 2.19.0 Aug 13 07:17:08.011353 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:17:07.966061 ignition[759]: Stage: fetch Aug 13 07:17:08.046751 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:17:07.966287 ignition[759]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:08.070335 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:17:07.966300 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:08.110863 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:17:07.966420 ignition[759]: parsed url from cmdline: "" Aug 13 07:17:08.127366 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:17:07.966427 ignition[759]: no config URL provided Aug 13 07:17:08.136520 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:17:07.966437 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:17:08.152514 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:17:07.966450 ignition[759]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:17:08.186476 systemd[1]: Reached target sysinit.target - System Initialization. 
Aug 13 07:17:07.966474 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Aug 13 07:17:08.203409 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:17:07.971367 ignition[759]: GET result: OK Aug 13 07:17:08.228280 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:17:07.971475 ignition[759]: parsing config with SHA512: 54ba009c52c5e3289e8fd49ac8697340656b783906f39f93ec17438e4ec3dbc64775ad81f4db47b8d501662021f7f66f2d9029543a5070ef8793d04109d16cd2 Aug 13 07:17:07.980267 ignition[759]: fetch: fetch complete Aug 13 07:17:07.980274 ignition[759]: fetch: fetch passed Aug 13 07:17:07.980331 ignition[759]: Ignition finished successfully Aug 13 07:17:08.033768 ignition[766]: Ignition 2.19.0 Aug 13 07:17:08.033778 ignition[766]: Stage: kargs Aug 13 07:17:08.034067 ignition[766]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:08.034087 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:08.035093 ignition[766]: kargs: kargs passed Aug 13 07:17:08.035151 ignition[766]: Ignition finished successfully Aug 13 07:17:08.107982 ignition[771]: Ignition 2.19.0 Aug 13 07:17:08.107991 ignition[771]: Stage: disks Aug 13 07:17:08.108242 ignition[771]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:08.108256 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:08.109357 ignition[771]: disks: disks passed Aug 13 07:17:08.109546 ignition[771]: Ignition finished successfully Aug 13 07:17:08.294949 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 13 07:17:08.474245 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:17:08.494355 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:17:08.641086 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:17:08.641760 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:17:08.642675 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:17:08.666293 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:17:08.691594 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:17:08.700815 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 07:17:08.765269 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (788) Aug 13 07:17:08.765326 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:08.765355 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:17:08.765393 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:17:08.765434 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 07:17:08.765459 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:17:08.700902 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:17:08.700954 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:17:08.778456 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:17:08.802620 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:17:08.818353 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:17:08.893236 systemd-networkd[749]: eth0: Gained IPv6LL Aug 13 07:17:08.959687 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:17:08.972687 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:17:08.984230 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:17:08.994239 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:17:09.146545 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:17:09.152241 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:17:09.180301 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:17:09.200130 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:09.210541 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:17:09.260608 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:17:09.269263 ignition[900]: INFO : Ignition 2.19.0 Aug 13 07:17:09.269263 ignition[900]: INFO : Stage: mount Aug 13 07:17:09.269263 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:09.269263 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:09.269263 ignition[900]: INFO : mount: mount passed Aug 13 07:17:09.269263 ignition[900]: INFO : Ignition finished successfully Aug 13 07:17:09.279693 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:17:09.302224 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:17:09.368342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 13 07:17:09.393088 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (912) Aug 13 07:17:09.410856 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:09.410964 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:17:09.411009 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:17:09.433822 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 07:17:09.433924 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:17:09.437661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:17:09.479991 ignition[929]: INFO : Ignition 2.19.0 Aug 13 07:17:09.479991 ignition[929]: INFO : Stage: files Aug 13 07:17:09.495247 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:09.495247 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:09.495247 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:17:09.495247 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:17:09.495247 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:17:09.495247 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:17:09.495247 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:17:09.495247 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:17:09.491734 unknown[929]: wrote ssh authorized keys file for user: core Aug 13 07:17:09.597237 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 07:17:09.597237 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 07:17:09.631300 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 07:17:09.972739 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 07:17:10.439817 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 07:17:10.794569 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:17:10.794569 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:17:10.813492 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:17:10.813492 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:17:10.813492 ignition[929]: INFO : files: files passed Aug 13 07:17:10.813492 ignition[929]: INFO : Ignition finished successfully Aug 13 07:17:10.799289 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:17:10.839353 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:17:10.874314 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:17:10.888892 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:17:11.053293 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:17:11.053293 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:17:10.889018 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:17:11.102291 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:17:10.949655 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:17:10.976626 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:17:10.999323 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:17:11.070742 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:17:11.070867 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:17:11.093217 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:17:11.112470 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:17:11.137524 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:17:11.143490 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:17:11.193649 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:17:11.221454 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:17:11.257651 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:17:11.271403 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:17:11.294461 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:17:11.306532 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:17:11.306618 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:17:11.370328 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:17:11.381469 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:17:11.398503 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:17:11.414529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:17:11.453268 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:17:11.453560 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:17:11.472534 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:17:11.510266 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:17:11.510539 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:17:11.537253 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:17:11.547454 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:17:11.547541 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:17:11.578524 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Aug 13 07:17:11.588483 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:17:11.606480 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:17:11.606582 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:17:11.626481 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:17:11.626568 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:17:11.665584 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:17:11.665669 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:17:11.674532 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:17:11.674612 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:17:11.742303 ignition[982]: INFO : Ignition 2.19.0
Aug 13 07:17:11.742303 ignition[982]: INFO : Stage: umount
Aug 13 07:17:11.742303 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:17:11.742303 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:17:11.742303 ignition[982]: INFO : umount: umount passed
Aug 13 07:17:11.742303 ignition[982]: INFO : Ignition finished successfully
Aug 13 07:17:11.701213 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:17:11.756237 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:17:11.767298 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:17:11.767436 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:17:11.788398 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:17:11.788504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:17:11.840090 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:17:11.841118 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:17:11.841294 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:17:11.860774 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:17:11.860897 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:17:11.879786 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:17:11.879905 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:17:11.901925 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:17:11.902097 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:17:11.919502 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:17:11.919581 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:17:11.929542 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:17:11.929612 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:17:11.946621 systemd[1]: Stopped target network.target - Network.
Aug 13 07:17:11.962459 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:17:11.962554 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:17:11.979600 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:17:12.000468 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:17:12.004186 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:17:12.027265 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:17:12.043289 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:17:12.052544 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:17:12.052612 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:17:12.068566 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:17:12.068645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:17:12.085551 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:17:12.085635 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:17:12.103569 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:17:12.103646 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:17:12.140509 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:17:12.140594 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:17:12.167792 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:17:12.173172 systemd-networkd[749]: eth0: DHCPv6 lease lost
Aug 13 07:17:12.186482 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:17:12.206911 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:17:12.207125 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:17:12.223187 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:17:12.223425 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:17:12.241241 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:17:12.241298 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:17:12.267211 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:17:12.295239 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:17:12.295391 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:17:12.315377 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:17:12.315467 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:17:12.334365 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:17:12.334476 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:17:12.352373 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:17:12.352467 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:17:12.373630 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:17:12.387582 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:17:12.387805 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:17:12.810400 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:17:12.421844 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:17:12.422008 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:17:12.443182 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:17:12.443282 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:17:12.461353 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:17:12.461424 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:17:12.481340 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:17:12.481463 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:17:12.510262 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:17:12.510391 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:17:12.539255 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:17:12.539391 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:17:12.574323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:17:12.586372 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:17:12.586454 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:17:12.615444 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:17:12.615524 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:17:12.637467 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:17:12.637547 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:17:12.658468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:17:12.658550 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:17:12.671134 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:17:12.671263 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:17:12.689144 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:17:12.712343 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:17:12.757598 systemd[1]: Switching root.
Aug 13 07:17:13.084356 systemd-journald[183]: Journal stopped
Aug 13 07:17:04.132085 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:17:04.132154 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:04.132173 kernel: BIOS-provided physical RAM map:
Aug 13 07:17:04.132188 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Aug 13 07:17:04.132202 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Aug 13 07:17:04.132216 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Aug 13 07:17:04.132234 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Aug 13 07:17:04.132253 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Aug 13 07:17:04.132269 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Aug 13 07:17:04.132284 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Aug 13 07:17:04.132299 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Aug 13 07:17:04.132315 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Aug 13 07:17:04.132330 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Aug 13 07:17:04.132346 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Aug 13 07:17:04.132369 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Aug 13 07:17:04.132386 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Aug 13 07:17:04.132402 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Aug 13 07:17:04.132419 kernel: NX (Execute Disable) protection: active
Aug 13 07:17:04.132436 kernel: APIC: Static calls initialized
Aug 13 07:17:04.132452 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:17:04.132469 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Aug 13 07:17:04.132486 kernel: SMBIOS 2.4 present.
Aug 13 07:17:04.132503 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Aug 13 07:17:04.132520 kernel: Hypervisor detected: KVM
Aug 13 07:17:04.132540 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:17:04.132557 kernel: kvm-clock: using sched offset of 13038050282 cycles
Aug 13 07:17:04.132575 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:17:04.132592 kernel: tsc: Detected 2299.998 MHz processor
Aug 13 07:17:04.132609 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:17:04.132627 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:17:04.132645 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Aug 13 07:17:04.132662 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Aug 13 07:17:04.132679 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:17:04.132700 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Aug 13 07:17:04.132717 kernel: Using GB pages for direct mapping
Aug 13 07:17:04.132734 kernel: Secure boot disabled
Aug 13 07:17:04.132751 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:17:04.132769 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Aug 13 07:17:04.132786 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Aug 13 07:17:04.132804 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Aug 13 07:17:04.132828 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Aug 13 07:17:04.132849 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Aug 13 07:17:04.132867 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212)
Aug 13 07:17:04.132885 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Aug 13 07:17:04.132903 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Aug 13 07:17:04.132922 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Aug 13 07:17:04.132940 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Aug 13 07:17:04.132962 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Aug 13 07:17:04.132980 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Aug 13 07:17:04.132998 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Aug 13 07:17:04.133016 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Aug 13 07:17:04.133034 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Aug 13 07:17:04.133076 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Aug 13 07:17:04.133095 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Aug 13 07:17:04.133113 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Aug 13 07:17:04.133140 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Aug 13 07:17:04.133163 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Aug 13 07:17:04.133181 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:17:04.133199 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:17:04.133216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:17:04.133235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Aug 13 07:17:04.133253 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Aug 13 07:17:04.133271 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Aug 13 07:17:04.133289 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Aug 13 07:17:04.133307 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Aug 13 07:17:04.133329 kernel: Zone ranges:
Aug 13 07:17:04.133347 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:17:04.133365 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 07:17:04.133383 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 07:17:04.133401 kernel: Movable zone start for each node
Aug 13 07:17:04.133419 kernel: Early memory node ranges
Aug 13 07:17:04.133437 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Aug 13 07:17:04.133455 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Aug 13 07:17:04.133473 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Aug 13 07:17:04.133495 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Aug 13 07:17:04.133513 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 07:17:04.133531 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Aug 13 07:17:04.133549 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:17:04.133567 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Aug 13 07:17:04.133585 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Aug 13 07:17:04.133603 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Aug 13 07:17:04.133622 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Aug 13 07:17:04.133656 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 13 07:17:04.133679 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:17:04.133697 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:17:04.133714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:17:04.133732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:17:04.133750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:17:04.133768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:17:04.133786 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:17:04.133804 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:17:04.133823 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:17:04.133850 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:17:04.133869 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:17:04.133887 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:17:04.133905 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:17:04.133923 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:17:04.133940 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:17:04.133958 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:17:04.133976 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:17:04.133996 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:04.134019 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:17:04.134037 kernel: random: crng init done
Aug 13 07:17:04.134129 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 07:17:04.134147 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:17:04.134165 kernel: Fallback order for Node 0: 0
Aug 13 07:17:04.134183 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Aug 13 07:17:04.134201 kernel: Policy zone: Normal
Aug 13 07:17:04.134219 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:17:04.134242 kernel: software IO TLB: area num 2.
Aug 13 07:17:04.134261 kernel: Memory: 7513392K/7860584K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 346932K reserved, 0K cma-reserved)
Aug 13 07:17:04.134279 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:17:04.134297 kernel: Kernel/User page tables isolation: enabled
Aug 13 07:17:04.134315 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:17:04.134333 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:17:04.134351 kernel: Dynamic Preempt: voluntary
Aug 13 07:17:04.134369 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:17:04.134394 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:17:04.134431 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:17:04.134450 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:17:04.134469 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:17:04.134492 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:17:04.134511 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:17:04.134530 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:17:04.134549 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 07:17:04.134568 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:17:04.134588 kernel: Console: colour dummy device 80x25
Aug 13 07:17:04.134611 kernel: printk: console [ttyS0] enabled
Aug 13 07:17:04.134629 kernel: ACPI: Core revision 20230628
Aug 13 07:17:04.134648 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:17:04.134667 kernel: x2apic enabled
Aug 13 07:17:04.134687 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:17:04.134707 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Aug 13 07:17:04.134727 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:17:04.134746 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Aug 13 07:17:04.134768 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Aug 13 07:17:04.134788 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Aug 13 07:17:04.134807 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:17:04.134826 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Aug 13 07:17:04.134845 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Aug 13 07:17:04.134864 kernel: Spectre V2 : Mitigation: IBRS
Aug 13 07:17:04.134883 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:17:04.134902 kernel: RETBleed: Mitigation: IBRS
Aug 13 07:17:04.134921 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:17:04.134945 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Aug 13 07:17:04.134964 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:17:04.134983 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 07:17:04.135003 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:17:04.135022 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:17:04.135054 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:17:04.135084 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:17:04.135103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:17:04.135129 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:17:04.135154 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 07:17:04.135173 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:17:04.135192 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:17:04.135211 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:17:04.135230 kernel: landlock: Up and running.
Aug 13 07:17:04.135249 kernel: SELinux: Initializing.
Aug 13 07:17:04.135268 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:17:04.135288 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:17:04.135307 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Aug 13 07:17:04.135331 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:17:04.135350 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:17:04.135369 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:17:04.135387 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Aug 13 07:17:04.135688 kernel: signal: max sigframe size: 1776
Aug 13 07:17:04.135707 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:17:04.135726 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:17:04.135743 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:17:04.135761 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:17:04.135785 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:17:04.135802 kernel: .... node #0, CPUs: #1
Aug 13 07:17:04.135820 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 13 07:17:04.135838 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 07:17:04.135856 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:17:04.136001 kernel: smpboot: Max logical packages: 1
Aug 13 07:17:04.136020 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Aug 13 07:17:04.136038 kernel: devtmpfs: initialized
Aug 13 07:17:04.136093 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:17:04.136112 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Aug 13 07:17:04.136139 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:17:04.136278 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:17:04.136297 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:17:04.136315 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:17:04.136335 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:17:04.136353 kernel: audit: type=2000 audit(1755069422.737:1): state=initialized audit_enabled=0 res=1
Aug 13 07:17:04.136486 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:17:04.136509 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:17:04.136528 kernel: cpuidle: using governor menu
Aug 13 07:17:04.136546 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:17:04.136565 kernel: dca service started, version 1.12.1
Aug 13 07:17:04.136584 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:17:04.136716 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:17:04.136736 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:17:04.136756 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:17:04.136774 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:17:04.136796 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:17:04.136813 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:17:04.136829 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:17:04.136847 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:17:04.136865 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 13 07:17:04.136882 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:17:04.136899 kernel: ACPI: Interpreter enabled
Aug 13 07:17:04.136917 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:17:04.136935 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:17:04.136958 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:17:04.136976 kernel: PCI: Ignoring E820 reservations for host bridge windows
Aug 13 07:17:04.136995 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Aug 13 07:17:04.137149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:17:04.137521 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:17:04.137728 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 07:17:04.137914 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 07:17:04.137945 kernel: PCI host bridge to bus 0000:00
Aug 13 07:17:04.140807 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:17:04.141026 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:17:04.141237 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:17:04.141401 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Aug 13 07:17:04.141564 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:17:04.141775 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:17:04.141983 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Aug 13 07:17:04.142199 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 07:17:04.142386 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 07:17:04.142579 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Aug 13 07:17:04.143230 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 07:17:04.143594 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Aug 13 07:17:04.143952 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:17:04.144629 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 07:17:04.144833 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Aug 13 07:17:04.145036 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:17:04.145250 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Aug 13 07:17:04.145434 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Aug 13 07:17:04.145458 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:17:04.145484 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:17:04.145503 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:17:04.145522 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:17:04.145541 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:17:04.145559 kernel: iommu: Default domain type: Translated
Aug 13 07:17:04.145578 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:17:04.145597 kernel: efivars: Registered efivars operations
Aug 13 07:17:04.145615 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:17:04.145634 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:17:04.145657 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Aug 13 07:17:04.145676 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Aug 13 07:17:04.145693 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Aug 13 07:17:04.145711 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Aug 13 07:17:04.145728 kernel: vgaarb: loaded
Aug 13 07:17:04.145746 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:17:04.145764 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:17:04.145781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:17:04.145801 kernel: pnp: PnP ACPI init
Aug 13 07:17:04.145822 kernel: pnp: PnP ACPI: found 7 devices
Aug 13 07:17:04.145854 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:17:04.145879 kernel: NET: Registered PF_INET protocol family
Aug 13 07:17:04.145918 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:17:04.145936 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 07:17:04.145952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:17:04.145970 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:17:04.145990 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 13 07:17:04.146007 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 07:17:04.146029 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:17:04.147002 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:17:04.147029 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:17:04.148678 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:17:04.149223 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:17:04.149637 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:17:04.149818 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:17:04.149985 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Aug 13 07:17:04.150252 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:17:04.150285 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:17:04.150307 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 07:17:04.150327 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Aug 13 07:17:04.150346 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:17:04.150364 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:17:04.150383 kernel: clocksource: Switched to clocksource tsc
Aug 13 07:17:04.150401 kernel: Initialise system trusted keyrings
Aug 13 07:17:04.150428 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 07:17:04.150449 kernel: Key type asymmetric registered
Aug 13 07:17:04.150468 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:17:04.150487 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:17:04.150505 kernel: io scheduler mq-deadline registered
Aug 13 07:17:04.150524 kernel: io scheduler kyber registered
Aug 13 07:17:04.150543 kernel: io scheduler bfq registered
Aug 13 07:17:04.150562 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:17:04.150584 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 07:17:04.150804 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Aug 13 07:17:04.150830 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 13 07:17:04.151036 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Aug 13 07:17:04.152185 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 07:17:04.152428 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Aug 13 07:17:04.152455 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:17:04.152476 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:17:04.152496 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 07:17:04.152514 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Aug 13 07:17:04.152541 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Aug 13 07:17:04.152756 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Aug 13 07:17:04.152784 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:17:04.152803 kernel: i8042: Warning: Keylock active
Aug 13 07:17:04.152822 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:17:04.152841 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:17:04.154139 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 07:17:04.154370 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 07:17:04.154549 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T07:17:03 UTC (1755069423)
Aug 13 07:17:04.154725 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 07:17:04.154750 kernel: intel_pstate: CPU model not supported
Aug 13 07:17:04.154771 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:17:04.154791 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:17:04.154811 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:17:04.154831 kernel: Segment Routing with IPv6
Aug 13 07:17:04.154854 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:17:04.154873 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:17:04.154892
kernel: Key type dns_resolver registered Aug 13 07:17:04.154937 kernel: IPI shorthand broadcast: enabled Aug 13 07:17:04.154957 kernel: sched_clock: Marking stable (895005807, 171408081)->(1132864532, -66450644) Aug 13 07:17:04.154977 kernel: registered taskstats version 1 Aug 13 07:17:04.154996 kernel: Loading compiled-in X.509 certificates Aug 13 07:17:04.155016 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:17:04.155035 kernel: Key type .fscrypt registered Aug 13 07:17:04.156113 kernel: Key type fscrypt-provisioning registered Aug 13 07:17:04.156174 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:17:04.156196 kernel: ima: No architecture policies found Aug 13 07:17:04.156216 kernel: clk: Disabling unused clocks Aug 13 07:17:04.156234 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:17:04.156252 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:17:04.156271 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:17:04.156289 kernel: Run /init as init process Aug 13 07:17:04.156306 kernel: with arguments: Aug 13 07:17:04.156329 kernel: /init Aug 13 07:17:04.156346 kernel: with environment: Aug 13 07:17:04.156363 kernel: HOME=/ Aug 13 07:17:04.156380 kernel: TERM=linux Aug 13 07:17:04.156399 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:17:04.156416 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:17:04.156439 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:17:04.156461 systemd[1]: Detected virtualization google. 
Aug 13 07:17:04.156485 systemd[1]: Detected architecture x86-64. Aug 13 07:17:04.156503 systemd[1]: Running in initrd. Aug 13 07:17:04.156523 systemd[1]: No hostname configured, using default hostname. Aug 13 07:17:04.156541 systemd[1]: Hostname set to <localhost>. Aug 13 07:17:04.156560 systemd[1]: Initializing machine ID from random generator. Aug 13 07:17:04.156578 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:17:04.156596 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:17:04.156616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:17:04.156644 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:17:04.156665 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:17:04.156686 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:17:04.156706 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:17:04.156728 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:17:04.156747 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:17:04.156770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:17:04.156789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:17:04.156807 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:17:04.156846 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:17:04.156869 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:17:04.156887 systemd[1]: Reached target timers.target - Timer Units. 
Aug 13 07:17:04.156907 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:17:04.156930 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:17:04.156949 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:17:04.156969 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:17:04.156988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:17:04.157008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:17:04.157027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:17:04.157076 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:17:04.157096 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:17:04.157127 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:17:04.157147 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:17:04.157167 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:17:04.157186 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:17:04.157206 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:17:04.157226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:04.157288 systemd-journald[183]: Collecting audit messages is disabled. Aug 13 07:17:04.157335 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:17:04.157356 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:17:04.157376 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:17:04.157402 systemd-journald[183]: Journal started Aug 13 07:17:04.157442 systemd-journald[183]: Runtime Journal (/run/log/journal/1416bb73631a45518f815b2b2a6b8c46) is 8.0M, max 148.7M, 140.7M free. 
Aug 13 07:17:04.161551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:17:04.143295 systemd-modules-load[184]: Inserted module 'overlay' Aug 13 07:17:04.170080 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:17:04.190409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:17:04.199647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:04.218704 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:17:04.218750 kernel: Bridge firewalling registered Aug 13 07:17:04.205101 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 07:17:04.206646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:17:04.212063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:17:04.226391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:04.237438 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:17:04.246306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:17:04.247543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:17:04.259501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:17:04.269170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:17:04.283588 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:17:04.293668 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 07:17:04.310389 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:17:04.332365 systemd-resolved[209]: Positive Trust Anchors: Aug 13 07:17:04.332954 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:17:04.333026 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:17:04.340023 systemd-resolved[209]: Defaulting to hostname 'linux'. Aug 13 07:17:04.359272 dracut-cmdline[217]: dracut-dracut-053 Aug 13 07:17:04.359272 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:17:04.341831 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:17:04.367339 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:17:04.453092 kernel: SCSI subsystem initialized Aug 13 07:17:04.465100 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 07:17:04.478091 kernel: iscsi: registered transport (tcp) Aug 13 07:17:04.504262 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:17:04.504352 kernel: QLogic iSCSI HBA Driver Aug 13 07:17:04.561090 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:17:04.566488 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:17:04.611850 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:17:04.611948 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:17:04.611980 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:17:04.663105 kernel: raid6: avx2x4 gen() 16787 MB/s Aug 13 07:17:04.681139 kernel: raid6: avx2x2 gen() 16587 MB/s Aug 13 07:17:04.699132 kernel: raid6: avx2x1 gen() 13252 MB/s Aug 13 07:17:04.699223 kernel: raid6: using algorithm avx2x4 gen() 16787 MB/s Aug 13 07:17:04.716766 kernel: raid6: .... xor() 7354 MB/s, rmw enabled Aug 13 07:17:04.716875 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:17:04.741097 kernel: xor: automatically using best checksumming function avx Aug 13 07:17:04.919094 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:17:04.932755 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:17:04.942367 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:17:04.960349 systemd-udevd[399]: Using default interface naming scheme 'v255'. Aug 13 07:17:04.968597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:17:04.995348 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:17:05.046346 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Aug 13 07:17:05.086200 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 13 07:17:05.090463 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:17:05.221454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:17:05.238425 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:17:05.300852 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:17:05.322633 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:17:05.343338 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:17:05.355202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:17:05.410234 kernel: scsi host0: Virtio SCSI HBA Aug 13 07:17:05.370395 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:17:05.409393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:17:05.502216 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Aug 13 07:17:05.502314 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:17:05.502344 kernel: AES CTR mode by8 optimization enabled Aug 13 07:17:05.485222 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:17:05.485437 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:17:05.635301 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Aug 13 07:17:05.635697 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Aug 13 07:17:05.635956 kernel: sd 0:0:1:0: [sda] Write Protect is off Aug 13 07:17:05.636228 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Aug 13 07:17:05.636457 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 07:17:05.636693 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Aug 13 07:17:05.636719 kernel: GPT:17805311 != 25165823 Aug 13 07:17:05.636750 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:17:05.636773 kernel: GPT:17805311 != 25165823 Aug 13 07:17:05.636796 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:17:05.636819 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:05.636844 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Aug 13 07:17:05.524388 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:05.535173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:17:05.535446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:05.547238 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:05.631981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:05.713076 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (450) Aug 13 07:17:05.720300 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Aug 13 07:17:05.743090 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (458) Aug 13 07:17:05.753737 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:17:05.764785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:05.814273 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Aug 13 07:17:05.822489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 13 07:17:05.840795 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Aug 13 07:17:05.857560 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. 
Aug 13 07:17:05.886667 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:17:05.933085 disk-uuid[539]: Primary Header is updated. Aug 13 07:17:05.933085 disk-uuid[539]: Secondary Entries is updated. Aug 13 07:17:05.933085 disk-uuid[539]: Secondary Header is updated. Aug 13 07:17:05.970287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:05.935412 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:17:05.994200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:06.017106 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:06.022121 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:17:07.017239 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:17:07.017350 disk-uuid[541]: The operation has completed successfully. Aug 13 07:17:07.115397 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:17:07.115582 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:17:07.134410 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:17:07.163746 sh[566]: Success Aug 13 07:17:07.190199 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:17:07.313379 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:17:07.321243 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:17:07.365368 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 13 07:17:07.398130 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:17:07.398263 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:17:07.415929 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:17:07.416074 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:17:07.429107 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:17:07.467183 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 07:17:07.476650 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:17:07.486450 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:17:07.491403 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:17:07.528320 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:17:07.577408 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:07.577520 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:17:07.577547 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:17:07.596312 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 07:17:07.596436 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:17:07.624334 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:07.623657 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:17:07.634256 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:17:07.659472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Aug 13 07:17:07.707065 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:17:07.738448 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:17:07.829086 systemd-networkd[749]: lo: Link UP Aug 13 07:17:07.829096 systemd-networkd[749]: lo: Gained carrier Aug 13 07:17:07.836514 systemd-networkd[749]: Enumeration completed Aug 13 07:17:07.836845 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:17:07.837747 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:17:07.837757 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:17:07.839941 systemd-networkd[749]: eth0: Link UP Aug 13 07:17:07.913179 ignition[704]: Ignition 2.19.0 Aug 13 07:17:07.839951 systemd-networkd[749]: eth0: Gained carrier Aug 13 07:17:07.913191 ignition[704]: Stage: fetch-offline Aug 13 07:17:07.839970 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:17:07.913273 ignition[704]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:07.858477 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.37/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 13 07:17:07.913290 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:07.903466 systemd[1]: Reached target network.target - Network. Aug 13 07:17:07.913575 ignition[704]: parsed url from cmdline: "" Aug 13 07:17:07.915784 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:17:07.913582 ignition[704]: no config URL provided Aug 13 07:17:07.945369 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 07:17:07.913593 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:17:07.979805 unknown[759]: fetched base config from "system" Aug 13 07:17:07.913607 ignition[704]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:17:07.979819 unknown[759]: fetched base config from "system" Aug 13 07:17:07.913618 ignition[704]: failed to fetch config: resource requires networking Aug 13 07:17:07.979830 unknown[759]: fetched user config from "gcp" Aug 13 07:17:07.913927 ignition[704]: Ignition finished successfully Aug 13 07:17:07.982381 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:17:07.966038 ignition[759]: Ignition 2.19.0 Aug 13 07:17:08.011353 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:17:07.966061 ignition[759]: Stage: fetch Aug 13 07:17:08.046751 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:17:07.966287 ignition[759]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:08.070335 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:17:07.966300 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:08.110863 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:17:07.966420 ignition[759]: parsed url from cmdline: "" Aug 13 07:17:08.127366 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:17:07.966427 ignition[759]: no config URL provided Aug 13 07:17:08.136520 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:17:07.966437 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:17:08.152514 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:17:07.966450 ignition[759]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:17:08.186476 systemd[1]: Reached target sysinit.target - System Initialization. 
Aug 13 07:17:07.966474 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Aug 13 07:17:08.203409 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:17:07.971367 ignition[759]: GET result: OK Aug 13 07:17:08.228280 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:17:07.971475 ignition[759]: parsing config with SHA512: 54ba009c52c5e3289e8fd49ac8697340656b783906f39f93ec17438e4ec3dbc64775ad81f4db47b8d501662021f7f66f2d9029543a5070ef8793d04109d16cd2 Aug 13 07:17:07.980267 ignition[759]: fetch: fetch complete Aug 13 07:17:07.980274 ignition[759]: fetch: fetch passed Aug 13 07:17:07.980331 ignition[759]: Ignition finished successfully Aug 13 07:17:08.033768 ignition[766]: Ignition 2.19.0 Aug 13 07:17:08.033778 ignition[766]: Stage: kargs Aug 13 07:17:08.034067 ignition[766]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:08.034087 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:08.035093 ignition[766]: kargs: kargs passed Aug 13 07:17:08.035151 ignition[766]: Ignition finished successfully Aug 13 07:17:08.107982 ignition[771]: Ignition 2.19.0 Aug 13 07:17:08.107991 ignition[771]: Stage: disks Aug 13 07:17:08.108242 ignition[771]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:08.108256 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:08.109357 ignition[771]: disks: disks passed Aug 13 07:17:08.109546 ignition[771]: Ignition finished successfully Aug 13 07:17:08.294949 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 13 07:17:08.474245 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:17:08.494355 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:17:08.641086 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. 
Quota mode: none. Aug 13 07:17:08.641760 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:17:08.642675 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:17:08.666293 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:17:08.691594 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:17:08.700815 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 07:17:08.765269 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (788) Aug 13 07:17:08.765326 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:08.765355 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:17:08.765393 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:17:08.765434 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 07:17:08.765459 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:17:08.700902 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:17:08.700954 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:17:08.778456 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:17:08.802620 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:17:08.818353 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 07:17:08.893236 systemd-networkd[749]: eth0: Gained IPv6LL Aug 13 07:17:08.959687 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:17:08.972687 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:17:08.984230 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:17:08.994239 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:17:09.146545 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:17:09.152241 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:17:09.180301 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:17:09.200130 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:09.210541 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:17:09.260608 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:17:09.269263 ignition[900]: INFO : Ignition 2.19.0 Aug 13 07:17:09.269263 ignition[900]: INFO : Stage: mount Aug 13 07:17:09.269263 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:09.269263 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:17:09.269263 ignition[900]: INFO : mount: mount passed Aug 13 07:17:09.269263 ignition[900]: INFO : Ignition finished successfully Aug 13 07:17:09.279693 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:17:09.302224 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:17:09.368342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 13 07:17:09.393088 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (912)
Aug 13 07:17:09.410856 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:17:09.410964 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:17:09.411009 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:17:09.433822 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:17:09.433924 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:17:09.437661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:17:09.479991 ignition[929]: INFO : Ignition 2.19.0
Aug 13 07:17:09.479991 ignition[929]: INFO : Stage: files
Aug 13 07:17:09.495247 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:17:09.495247 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:17:09.495247 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:17:09.495247 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:17:09.495247 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:17:09.495247 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:17:09.495247 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:17:09.495247 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:17:09.491734 unknown[929]: wrote ssh authorized keys file for user: core
Aug 13 07:17:09.597237 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:17:09.597237 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 07:17:09.631300 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:17:09.972739 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:17:09.989268 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 07:17:10.439817 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:17:10.794569 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:17:10.794569 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:17:10.813492 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:17:10.813492 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:17:10.813492 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:17:10.813492 ignition[929]: INFO : files: files passed
Aug 13 07:17:10.813492 ignition[929]: INFO : Ignition finished successfully
Aug 13 07:17:10.799289 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:17:10.839353 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:17:10.874314 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:17:10.888892 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:17:11.053293 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:17:11.053293 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:17:10.889018 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:17:11.102291 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:17:10.949655 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:17:10.976626 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:17:10.999323 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:17:11.070742 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:17:11.070867 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:17:11.093217 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:17:11.112470 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:17:11.137524 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:17:11.143490 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:17:11.193649 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:17:11.221454 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:17:11.257651 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:17:11.271403 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:17:11.294461 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:17:11.306532 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:17:11.306618 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:17:11.370328 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:17:11.381469 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:17:11.398503 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:17:11.414529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:17:11.453268 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:17:11.453560 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:17:11.472534 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:17:11.510266 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:17:11.510539 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:17:11.537253 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:17:11.547454 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:17:11.547541 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:17:11.578524 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:17:11.588483 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:17:11.606480 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:17:11.606582 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:17:11.626481 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:17:11.626568 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:17:11.665584 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:17:11.665669 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:17:11.674532 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:17:11.674612 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:17:11.742303 ignition[982]: INFO : Ignition 2.19.0
Aug 13 07:17:11.742303 ignition[982]: INFO : Stage: umount
Aug 13 07:17:11.742303 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:17:11.742303 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:17:11.742303 ignition[982]: INFO : umount: umount passed
Aug 13 07:17:11.742303 ignition[982]: INFO : Ignition finished successfully
Aug 13 07:17:11.701213 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:17:11.756237 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:17:11.767298 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:17:11.767436 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:17:11.788398 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:17:11.788504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:17:11.840090 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:17:11.841118 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:17:11.841294 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:17:11.860774 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:17:11.860897 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:17:11.879786 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:17:11.879905 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:17:11.901925 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:17:11.902097 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:17:11.919502 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:17:11.919581 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:17:11.929542 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:17:11.929612 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:17:11.946621 systemd[1]: Stopped target network.target - Network.
Aug 13 07:17:11.962459 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:17:11.962554 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:17:11.979600 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:17:12.000468 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:17:12.004186 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:17:12.027265 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:17:12.043289 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:17:12.052544 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:17:12.052612 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:17:12.068566 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:17:12.068645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:17:12.085551 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:17:12.085635 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:17:12.103569 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:17:12.103646 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:17:12.140509 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:17:12.140594 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:17:12.167792 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:17:12.173172 systemd-networkd[749]: eth0: DHCPv6 lease lost
Aug 13 07:17:12.186482 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:17:12.206911 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:17:12.207125 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:17:12.223187 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:17:12.223425 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:17:12.241241 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:17:12.241298 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:17:12.267211 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:17:12.295239 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:17:12.295391 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:17:12.315377 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:17:12.315467 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:17:12.334365 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:17:12.334476 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:17:12.352373 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:17:12.352467 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:17:12.373630 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:17:12.387582 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:17:12.387805 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:17:12.810400 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:17:12.421844 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:17:12.422008 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:17:12.443182 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:17:12.443282 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:17:12.461353 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:17:12.461424 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:17:12.481340 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:17:12.481463 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:17:12.510262 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:17:12.510391 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:17:12.539255 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:17:12.539391 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:17:12.574323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:17:12.586372 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:17:12.586454 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:17:12.615444 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:17:12.615524 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:17:12.637467 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:17:12.637547 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:17:12.658468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:17:12.658550 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:17:12.671134 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:17:12.671263 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:17:12.689144 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:17:12.712343 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:17:12.757598 systemd[1]: Switching root.
Aug 13 07:17:13.084356 systemd-journald[183]: Journal stopped
Aug 13 07:17:15.702687 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:17:15.702749 kernel: SELinux: policy capability open_perms=1
Aug 13 07:17:15.702772 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:17:15.702790 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:17:15.702807 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:17:15.702825 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:17:15.702846 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:17:15.702869 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:17:15.702894 kernel: audit: type=1403 audit(1755069433.465:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:17:15.702917 systemd[1]: Successfully loaded SELinux policy in 100.220ms.
Aug 13 07:17:15.702940 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.831ms.
Aug 13 07:17:15.702963 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:17:15.702984 systemd[1]: Detected virtualization google.
Aug 13 07:17:15.703004 systemd[1]: Detected architecture x86-64.
Aug 13 07:17:15.703031 systemd[1]: Detected first boot.
Aug 13 07:17:15.703067 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:17:15.703090 zram_generator::config[1023]: No configuration found.
Aug 13 07:17:15.703112 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:17:15.703137 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:17:15.703163 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:17:15.703185 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:17:15.703207 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:17:15.703229 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:17:15.703250 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:17:15.703274 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:17:15.703296 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:17:15.703321 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:17:15.703343 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:17:15.703365 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:17:15.703387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:17:15.703409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:17:15.703430 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:17:15.703452 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:17:15.703474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:17:15.703500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:17:15.703522 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:17:15.703544 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:17:15.703566 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:17:15.703588 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:17:15.703611 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:17:15.703639 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:17:15.703662 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:17:15.703685 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:17:15.703711 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:17:15.703733 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:17:15.703754 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:17:15.703777 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:17:15.703799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:17:15.703821 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:17:15.703844 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:17:15.703872 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:17:15.703903 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:17:15.703926 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:17:15.703949 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:17:15.703973 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:17:15.704000 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:17:15.704024 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:17:15.704059 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:17:15.704083 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:17:15.704106 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:17:15.704130 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:17:15.704153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:17:15.704176 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:17:15.704203 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:17:15.704226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:17:15.704249 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:17:15.704272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:17:15.704295 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:17:15.704318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:17:15.704340 kernel: ACPI: bus type drm_connector registered
Aug 13 07:17:15.704362 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:17:15.704390 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:17:15.704413 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:17:15.704436 kernel: fuse: init (API version 7.39)
Aug 13 07:17:15.704456 kernel: loop: module loaded
Aug 13 07:17:15.704478 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:17:15.704501 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:17:15.704524 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:17:15.704547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:17:15.704602 systemd-journald[1110]: Collecting audit messages is disabled.
Aug 13 07:17:15.704655 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:17:15.704920 systemd-journald[1110]: Journal started
Aug 13 07:17:15.704996 systemd-journald[1110]: Runtime Journal (/run/log/journal/6c1e41f921584bf49fbe23127d6aac7c) is 8.0M, max 148.7M, 140.7M free.
Aug 13 07:17:14.443987 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:17:14.466808 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 07:17:14.467461 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:17:15.738137 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:17:15.768095 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:17:15.790133 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:17:15.790241 systemd[1]: Stopped verity-setup.service.
Aug 13 07:17:15.816290 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:17:15.827404 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:17:15.838915 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:17:15.849661 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:17:15.860629 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:17:15.870670 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:17:15.880630 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:17:15.890641 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:17:15.901901 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:17:15.913945 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:17:15.925948 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:17:15.926265 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:17:15.938876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:17:15.939187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:17:15.951879 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:17:15.952176 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:17:15.962751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:17:15.963028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:17:15.974746 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:17:15.974989 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:17:15.985706 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:17:15.985954 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:17:15.996710 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:17:16.007912 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:17:16.019754 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:17:16.031858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:17:16.058391 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:17:16.074301 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:17:16.100229 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:17:16.110412 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:17:16.110745 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:17:16.123151 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:17:16.144475 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:17:16.157825 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:17:16.168511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:17:16.177930 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:17:16.196066 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:17:16.208396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:17:16.217648 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:17:16.229348 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:17:16.242533 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:17:16.261449 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:17:16.267837 systemd-journald[1110]: Time spent on flushing to /var/log/journal/6c1e41f921584bf49fbe23127d6aac7c is 77.743ms for 930 entries.
Aug 13 07:17:16.267837 systemd-journald[1110]: System Journal (/var/log/journal/6c1e41f921584bf49fbe23127d6aac7c) is 8.0M, max 584.8M, 576.8M free.
Aug 13 07:17:16.397457 systemd-journald[1110]: Received client request to flush runtime journal.
Aug 13 07:17:16.397550 kernel: loop0: detected capacity change from 0 to 224512
Aug 13 07:17:16.294183 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:17:16.312588 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:17:16.329962 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:17:16.342582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:17:16.354802 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:17:16.372894 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:17:16.397038 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:17:16.412294 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:17:16.425433 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:17:16.438508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:17:16.471162 udevadm[1143]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:17:16.476093 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:17:16.515514 systemd-tmpfiles[1142]: ACLs are not supported, ignoring.
Aug 13 07:17:16.515549 systemd-tmpfiles[1142]: ACLs are not supported, ignoring.
Aug 13 07:17:16.521793 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:17:16.527592 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:17:16.528403 kernel: loop1: detected capacity change from 0 to 142488
Aug 13 07:17:16.539934 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:17:16.565412 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:17:16.634400 kernel: loop2: detected capacity change from 0 to 54824
Aug 13 07:17:16.712465 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:17:16.735510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:17:16.747080 kernel: loop3: detected capacity change from 0 to 140768
Aug 13 07:17:16.805163 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Aug 13 07:17:16.805792 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Aug 13 07:17:16.818313 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:17:16.863134 kernel: loop4: detected capacity change from 0 to 224512
Aug 13 07:17:16.914085 kernel: loop5: detected capacity change from 0 to 142488
Aug 13 07:17:16.972121 kernel: loop6: detected capacity change from 0 to 54824
Aug 13 07:17:17.003582 kernel: loop7: detected capacity change from 0 to 140768
Aug 13 07:17:17.068888 (sd-merge)[1168]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Aug 13 07:17:17.070218 (sd-merge)[1168]: Merged extensions into '/usr'.
Aug 13 07:17:17.078496 systemd[1]: Reloading requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:17:17.078695 systemd[1]: Reloading...
Aug 13 07:17:17.233074 zram_generator::config[1190]: No configuration found.
Aug 13 07:17:17.547487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:17:17.595712 ldconfig[1136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:17:17.666636 systemd[1]: Reloading finished in 587 ms.
Aug 13 07:17:17.696975 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:17:17.707751 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:17:17.731417 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:17:17.749578 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:17:17.773302 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:17:17.773528 systemd[1]: Reloading...
Aug 13 07:17:17.813542 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:17:17.815552 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:17:17.817901 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:17:17.820639 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Aug 13 07:17:17.820769 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Aug 13 07:17:17.835821 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:17:17.839130 systemd-tmpfiles[1235]: Skipping /boot
Aug 13 07:17:17.889668 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:17:17.892320 systemd-tmpfiles[1235]: Skipping /boot
Aug 13 07:17:17.947123 zram_generator::config[1262]: No configuration found.
Aug 13 07:17:18.092675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:17:18.162534 systemd[1]: Reloading finished in 388 ms.
Aug 13 07:17:18.188714 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:17:18.206839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:17:18.231537 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:17:18.255588 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:17:18.275627 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:17:18.296806 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:17:18.318237 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:17:18.327346 augenrules[1324]: No rules
Aug 13 07:17:18.338206 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:17:18.352487 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:17:18.369893 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:17:18.370384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:17:18.378525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:17:18.397527 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:17:18.399993 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Aug 13 07:17:18.419257 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:17:18.430818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:17:18.442019 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:17:18.452177 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:17:18.456326 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:17:18.468898 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:17:18.481824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:17:18.482773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:17:18.495934 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:17:18.507974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:17:18.509130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:17:18.522623 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:17:18.523320 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:17:18.552212 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:17:18.593246 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:17:18.633306 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:17:18.635416 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:17:18.649946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:17:18.651473 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:17:18.662355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:17:18.682379 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:17:18.703132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:17:18.725312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:17:18.735329 systemd[1]: Starting setup-oem.service - Setup OEM...
Aug 13 07:17:18.748378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:17:18.756290 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:17:18.768553 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:17:18.788379 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:17:18.799227 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:17:18.799293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:17:18.800498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:17:18.801772 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:17:18.814970 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:17:18.816180 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:17:18.827756 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:17:18.828013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:17:18.839742 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:17:18.840685 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:17:18.860437 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:17:18.880801 systemd-resolved[1319]: Positive Trust Anchors:
Aug 13 07:17:18.882184 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:17:18.882261 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:17:18.902500 systemd[1]: Finished setup-oem.service - Setup OEM.
Aug 13 07:17:18.911596 systemd-resolved[1319]: Defaulting to hostname 'linux'.
Aug 13 07:17:18.927337 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Aug 13 07:17:18.937281 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:17:18.937782 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:17:18.938025 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:17:18.952135 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:17:18.968012 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1352)
Aug 13 07:17:19.051353 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 07:17:19.075090 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 07:17:19.075487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:17:19.097270 systemd-networkd[1377]: lo: Link UP
Aug 13 07:17:19.097809 systemd-networkd[1377]: lo: Gained carrier
Aug 13 07:17:19.101859 systemd-networkd[1377]: Enumeration completed
Aug 13 07:17:19.102705 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:17:19.106085 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:17:19.106280 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:17:19.109022 systemd-networkd[1377]: eth0: Link UP
Aug 13 07:17:19.109600 systemd-networkd[1377]: eth0: Gained carrier
Aug 13 07:17:19.109978 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:17:19.113353 systemd[1]: Reached target network.target - Network.
Aug 13 07:17:19.121195 systemd-networkd[1377]: eth0: DHCPv4 address 10.128.0.37/32, gateway 10.128.0.1 acquired from 169.254.169.254
Aug 13 07:17:19.129386 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:17:19.131639 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Aug 13 07:17:19.163129 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Aug 13 07:17:19.201889 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:17:19.206230 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:17:19.206361 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Aug 13 07:17:19.225466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Aug 13 07:17:19.230220 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:17:19.230270 kernel: ACPI: button: Sleep Button [SLPF]
Aug 13 07:17:19.251369 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:17:19.269603 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:17:19.278625 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:17:19.303685 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:17:19.305570 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:17:19.330026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:17:19.356423 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:17:19.368662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:17:19.378312 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:17:19.388434 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:17:19.400350 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:17:19.411529 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:17:19.421488 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:17:19.433317 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:17:19.445285 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:17:19.445360 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:17:19.454275 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:17:19.465556 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:17:19.477056 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:17:19.500204 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:17:19.520502 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:17:19.527033 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:17:19.533445 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:17:19.543498 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:17:19.553293 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:17:19.562438 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:17:19.562502 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:17:19.568307 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:17:19.586002 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 07:17:19.605743 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:17:19.627368 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:17:19.651343 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:17:19.661243 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:17:19.673159 jq[1425]: false
Aug 13 07:17:19.675390 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:17:19.694669 systemd[1]: Started ntpd.service - Network Time Service.
Aug 13 07:17:19.712261 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:17:19.718290 coreos-metadata[1423]: Aug 13 07:17:19.716 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Aug 13 07:17:19.721276 coreos-metadata[1423]: Aug 13 07:17:19.721 INFO Fetch successful
Aug 13 07:17:19.721444 coreos-metadata[1423]: Aug 13 07:17:19.721 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Aug 13 07:17:19.731236 coreos-metadata[1423]: Aug 13 07:17:19.731 INFO Fetch successful
Aug 13 07:17:19.731236 coreos-metadata[1423]: Aug 13 07:17:19.731 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Aug 13 07:17:19.731437 coreos-metadata[1423]: Aug 13 07:17:19.731 INFO Fetch successful
Aug 13 07:17:19.731437 coreos-metadata[1423]: Aug 13 07:17:19.731 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Aug 13 07:17:19.731437 coreos-metadata[1423]: Aug 13 07:17:19.731 INFO Fetch successful
Aug 13 07:17:19.731546 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:17:19.739092 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:17:19.752198 extend-filesystems[1426]: Found loop4
Aug 13 07:17:19.752198 extend-filesystems[1426]: Found loop5
Aug 13 07:17:19.752198 extend-filesystems[1426]: Found loop6
Aug 13 07:17:19.752198 extend-filesystems[1426]: Found loop7
Aug 13 07:17:19.752198 extend-filesystems[1426]: Found sda
Aug 13 07:17:19.798419 extend-filesystems[1426]: Found sda1
Aug 13 07:17:19.798419 extend-filesystems[1426]: Found sda2
Aug 13 07:17:19.798419 extend-filesystems[1426]: Found sda3
Aug 13 07:17:19.798419 extend-filesystems[1426]: Found usr
Aug 13 07:17:19.798419 extend-filesystems[1426]: Found sda4
Aug 13 07:17:19.798419 extend-filesystems[1426]: Found sda6
Aug 13 07:17:19.798419 extend-filesystems[1426]: Found sda7
Aug 13 07:17:19.798419 extend-filesystems[1426]: Found sda9
Aug 13 07:17:19.798419 extend-filesystems[1426]: Checking size of /dev/sda9
Aug 13 07:17:19.942260 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Aug 13 07:17:19.942351 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Aug 13 07:17:19.942394 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1350)
Aug 13 07:17:19.765389 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:17:19.781955 dbus-daemon[1424]: [system] SELinux support is enabled
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:10 UTC 2025 (1): Starting
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: ----------------------------------------------------
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: ntp-4 is maintained by Network Time Foundation,
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: corporation. Support and training for ntp-4 are
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: available at https://www.nwtime.org/support
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: ----------------------------------------------------
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: proto: precision = 0.081 usec (-23)
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: basedate set to 2025-07-31
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: gps base set to 2025-08-03 (week 2378)
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: Listen and drop on 0 v6wildcard [::]:123
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: Listen normally on 2 lo 127.0.0.1:123
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: Listen normally on 3 eth0 10.128.0.37:123
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: Listen normally on 4 lo [::1]:123
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: bind(21) AF_INET6 fe80::4001:aff:fe80:25%2#123 flags 0x11 failed: Cannot assign requested address
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:25%2#123
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: failed to init interface for address fe80::4001:aff:fe80:25%2
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: Listening on routing socket on fd #21 for interface updates
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 07:17:19.948589 ntpd[1430]: 13 Aug 07:17:19 ntpd[1430]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 07:17:19.963287 extend-filesystems[1426]: Resized partition /dev/sda9
Aug 13 07:17:19.788935 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Aug 13 07:17:19.785503 dbus-daemon[1424]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1377 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 07:17:19.982779 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024)
Aug 13 07:17:19.982779 extend-filesystems[1451]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 07:17:19.982779 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 2
Aug 13 07:17:19.982779 extend-filesystems[1451]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Aug 13 07:17:19.789816 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:17:19.799991 ntpd[1430]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:10 UTC 2025 (1): Starting
Aug 13 07:17:20.015561 extend-filesystems[1426]: Resized filesystem in /dev/sda9
Aug 13 07:17:19.796386 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:17:19.800027 ntpd[1430]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 13 07:17:19.820245 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:17:19.800073 ntpd[1430]: ----------------------------------------------------
Aug 13 07:17:19.849901 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:17:20.047374 jq[1449]: true
Aug 13 07:17:19.800089 ntpd[1430]: ntp-4 is maintained by Network Time Foundation,
Aug 13 07:17:19.871787 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:17:20.047826 update_engine[1445]: I20250813 07:17:19.977641 1445 main.cc:92] Flatcar Update Engine starting
Aug 13 07:17:20.047826 update_engine[1445]: I20250813 07:17:19.987980 1445 update_check_scheduler.cc:74] Next update check in 9m24s
Aug 13 07:17:19.800104 ntpd[1430]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 13 07:17:19.914536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:17:19.800118 ntpd[1430]: corporation. Support and training for ntp-4 are
Aug 13 07:17:19.914819 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:17:20.079421 jq[1461]: true
Aug 13 07:17:19.800133 ntpd[1430]: available at https://www.nwtime.org/support
Aug 13 07:17:19.915371 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:17:19.800147 ntpd[1430]: ----------------------------------------------------
Aug 13 07:17:19.915653 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:17:19.808523 ntpd[1430]: proto: precision = 0.081 usec (-23)
Aug 13 07:17:19.944609 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:17:19.808983 ntpd[1430]: basedate set to 2025-07-31
Aug 13 07:17:19.946203 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:17:19.809005 ntpd[1430]: gps base set to 2025-08-03 (week 2378)
Aug 13 07:17:19.957993 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:17:19.828866 ntpd[1430]: Listen and drop on 0 v6wildcard [::]:123
Aug 13 07:17:19.960280 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:17:19.828970 ntpd[1430]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 13 07:17:20.070474 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:17:19.829410 ntpd[1430]: Listen normally on 2 lo 127.0.0.1:123
Aug 13 07:17:20.087474 systemd-logind[1441]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 13 07:17:19.833116 ntpd[1430]: Listen normally on 3 eth0 10.128.0.37:123
Aug 13 07:17:20.087514 systemd-logind[1441]: Watching system buttons on /dev/input/event3 (Sleep Button)
Aug 13 07:17:19.833223 ntpd[1430]: Listen normally on 4 lo [::1]:123
Aug 13 07:17:20.087549 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:17:19.833347 ntpd[1430]: bind(21) AF_INET6 fe80::4001:aff:fe80:25%2#123 flags 0x11 failed: Cannot assign requested address
Aug 13 07:17:20.089849 systemd-logind[1441]: New seat seat0.
Aug 13 07:17:19.833384 ntpd[1430]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:25%2#123
Aug 13 07:17:20.096413 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:17:19.833407 ntpd[1430]: failed to init interface for address fe80::4001:aff:fe80:25%2
Aug 13 07:17:19.833463 ntpd[1430]: Listening on routing socket on fd #21 for interface updates
Aug 13 07:17:19.851762 ntpd[1430]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 07:17:19.851803 ntpd[1430]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 07:17:20.023321 dbus-daemon[1424]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 07:17:20.110001 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:17:20.134245 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:17:20.134565 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:17:20.158597 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 07:17:20.169260 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:17:20.169570 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:17:20.191538 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:17:20.196648 tar[1459]: linux-amd64/LICENSE
Aug 13 07:17:20.196648 tar[1459]: linux-amd64/helm
Aug 13 07:17:20.210777 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 07:17:20.227264 systemd-networkd[1377]: eth0: Gained IPv6LL
Aug 13 07:17:20.246826 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 07:17:20.259514 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 07:17:20.263427 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:17:20.260495 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:17:20.273308 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:17:20.293255 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:17:20.319011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:17:20.336465 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:17:20.350509 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Aug 13 07:17:20.361982 systemd[1]: Starting sshkeys.service...
Aug 13 07:17:20.426086 init.sh[1500]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Aug 13 07:17:20.426086 init.sh[1500]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Aug 13 07:17:20.426086 init.sh[1500]: + /usr/bin/google_instance_setup
Aug 13 07:17:20.474023 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 07:17:20.479899 dbus-daemon[1424]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 07:17:20.480156 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 07:17:20.483397 dbus-daemon[1424]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1488 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 07:17:20.502790 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 07:17:20.524299 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 07:17:20.545592 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 07:17:20.555505 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:17:20.756586 polkitd[1515]: Started polkitd version 121
Aug 13 07:17:20.789086 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:17:20.802482 coreos-metadata[1512]: Aug 13 07:17:20.801 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Aug 13 07:17:20.812419 coreos-metadata[1512]: Aug 13 07:17:20.810 INFO Fetch failed with 404: resource not found
Aug 13 07:17:20.812419 coreos-metadata[1512]: Aug 13 07:17:20.810 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Aug 13 07:17:20.812419 coreos-metadata[1512]: Aug 13 07:17:20.810 INFO Fetch successful
Aug 13 07:17:20.812419 coreos-metadata[1512]: Aug 13 07:17:20.810 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Aug 13 07:17:20.812419 coreos-metadata[1512]: Aug 13 07:17:20.810 INFO Fetch failed with 404: resource not found
Aug 13 07:17:20.812419 coreos-metadata[1512]: Aug 13 07:17:20.810 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Aug 13 07:17:20.812419 coreos-metadata[1512]: Aug 13 07:17:20.810 INFO Fetch failed with 404: resource not found
Aug 13 07:17:20.812419 coreos-metadata[1512]: Aug 13 07:17:20.810 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Aug 13 07:17:20.812419 coreos-metadata[1512]: Aug 13 07:17:20.810 INFO Fetch successful
Aug 13 07:17:20.815737 polkitd[1515]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 07:17:20.815890 polkitd[1515]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 07:17:20.818991 unknown[1512]: wrote ssh authorized keys file for user: core
Aug 13 07:17:20.839297 polkitd[1515]: Finished loading, compiling and executing 2 rules
Aug 13 07:17:20.840578 dbus-daemon[1424]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 07:17:20.841820 polkitd[1515]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 07:17:20.848861 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 07:17:20.869706 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 07:17:20.889633 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 07:17:20.908257 systemd[1]: Started sshd@0-10.128.0.37:22-139.178.68.195:50154.service - OpenSSH per-connection server daemon (139.178.68.195:50154).
Aug 13 07:17:20.926122 update-ssh-keys[1535]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:17:20.928733 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 07:17:20.937418 systemd-hostnamed[1488]: Hostname set to (transient)
Aug 13 07:17:20.951377 systemd-resolved[1319]: System hostname changed to 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal'.
Aug 13 07:17:20.955178 systemd[1]: Finished sshkeys.service.
Aug 13 07:17:20.992987 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 07:17:20.993376 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 07:17:21.012268 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 07:17:21.122449 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 07:17:21.149017 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 07:17:21.163631 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 07:17:21.164281 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 07:17:21.187288 containerd[1462]: time="2025-08-13T07:17:21.187170993Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 07:17:21.271625 containerd[1462]: time="2025-08-13T07:17:21.269317149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:17:21.273797 containerd[1462]: time="2025-08-13T07:17:21.273730683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:17:21.273797 containerd[1462]: time="2025-08-13T07:17:21.273793636Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 07:17:21.273994 containerd[1462]: time="2025-08-13T07:17:21.273832710Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.274383256Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.274427941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.274525736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.274547063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.274868659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.274896051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.274919438Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.274937291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.275075523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.275396885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:17:21.277545 containerd[1462]: time="2025-08-13T07:17:21.275630023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:17:21.278444 containerd[1462]: time="2025-08-13T07:17:21.275656940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 07:17:21.278444 containerd[1462]: time="2025-08-13T07:17:21.275794445Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 07:17:21.278444 containerd[1462]: time="2025-08-13T07:17:21.275865441Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 07:17:21.290007 containerd[1462]: time="2025-08-13T07:17:21.289940058Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 07:17:21.290205 containerd[1462]: time="2025-08-13T07:17:21.290166801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 07:17:21.290262 containerd[1462]: time="2025-08-13T07:17:21.290202341Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 07:17:21.290262 containerd[1462]: time="2025-08-13T07:17:21.290251073Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 07:17:21.290359 containerd[1462]: time="2025-08-13T07:17:21.290282004Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 07:17:21.290612 containerd[1462]: time="2025-08-13T07:17:21.290556807Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 07:17:21.293945 containerd[1462]: time="2025-08-13T07:17:21.293768055Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 07:17:21.296143 containerd[1462]: time="2025-08-13T07:17:21.296093241Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298406155Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298486080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298519272Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298564731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298587428Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298630912Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298655849Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298678788Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298757040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298780536Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298836017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298896229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298919688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.299070 containerd[1462]: time="2025-08-13T07:17:21.298942518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.299811 containerd[1462]: time="2025-08-13T07:17:21.298981343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.299811 containerd[1462]: time="2025-08-13T07:17:21.299004902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.299811 containerd[1462]: time="2025-08-13T07:17:21.299061336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.299811 containerd[1462]: time="2025-08-13T07:17:21.299084736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.299811 containerd[1462]: time="2025-08-13T07:17:21.299105823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.301105 containerd[1462]: time="2025-08-13T07:17:21.300158891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.301105 containerd[1462]: time="2025-08-13T07:17:21.300519159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.301105 containerd[1462]: time="2025-08-13T07:17:21.300679649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.301105 containerd[1462]: time="2025-08-13T07:17:21.300708821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.301105 containerd[1462]: time="2025-08-13T07:17:21.300841105Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 07:17:21.303584 containerd[1462]: time="2025-08-13T07:17:21.301639131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.303584 containerd[1462]: time="2025-08-13T07:17:21.301724952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.303584 containerd[1462]: time="2025-08-13T07:17:21.301747793Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 07:17:21.303584 containerd[1462]: time="2025-08-13T07:17:21.302346144Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 07:17:21.303584 containerd[1462]: time="2025-08-13T07:17:21.302479702Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 07:17:21.305151 containerd[1462]: time="2025-08-13T07:17:21.302500820Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 07:17:21.305151 containerd[1462]: time="2025-08-13T07:17:21.304921974Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 07:17:21.305151 containerd[1462]: time="2025-08-13T07:17:21.304947564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.305151 containerd[1462]: time="2025-08-13T07:17:21.304973502Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 07:17:21.305151 containerd[1462]: time="2025-08-13T07:17:21.305016807Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 07:17:21.305151 containerd[1462]: time="2025-08-13T07:17:21.305040231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 07:17:21.309153 containerd[1462]: time="2025-08-13T07:17:21.308657421Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 07:17:21.309153 containerd[1462]: time="2025-08-13T07:17:21.308803804Z" level=info msg="Connect containerd service"
Aug 13 07:17:21.309153 containerd[1462]: time="2025-08-13T07:17:21.308917099Z" level=info msg="using legacy CRI server"
Aug 13 07:17:21.309153 containerd[1462]: time="2025-08-13T07:17:21.308933912Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 07:17:21.310235 containerd[1462]: time="2025-08-13T07:17:21.309246797Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 07:17:21.313157 containerd[1462]: time="2025-08-13T07:17:21.312518990Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:17:21.314298 containerd[1462]: time="2025-08-13T07:17:21.314224943Z" level=info msg="Start subscribing containerd event"
Aug 13 07:17:21.314391 containerd[1462]: time="2025-08-13T07:17:21.314326242Z" level=info msg="Start recovering state"
Aug 13 07:17:21.314458 containerd[1462]: time="2025-08-13T07:17:21.314436644Z" level=info msg="Start event monitor"
Aug 13 07:17:21.314509 containerd[1462]: time="2025-08-13T07:17:21.314463836Z" level=info msg="Start snapshots syncer"
Aug 13 07:17:21.314509 containerd[1462]: time="2025-08-13T07:17:21.314480868Z" level=info msg="Start cni network conf syncer for default"
Aug 13 07:17:21.314509 containerd[1462]: time="2025-08-13T07:17:21.314493585Z" level=info msg="Start streaming server"
Aug 13 07:17:21.317151 containerd[1462]: time="2025-08-13T07:17:21.316833677Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 07:17:21.317151 containerd[1462]: time="2025-08-13T07:17:21.316921502Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 07:17:21.318857 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 07:17:21.321277 containerd[1462]: time="2025-08-13T07:17:21.321226023Z" level=info msg="containerd successfully booted in 0.137229s"
Aug 13 07:17:21.407411 sshd[1541]: Accepted publickey for core from 139.178.68.195 port 50154 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:17:21.414170 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:21.443655 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 07:17:21.467497 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 07:17:21.488650 systemd-logind[1441]: New session 1 of user core.
Aug 13 07:17:21.527715 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 07:17:21.553033 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 07:17:21.597167 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 07:17:21.895153 tar[1459]: linux-amd64/README.md
Aug 13 07:17:21.912354 systemd[1558]: Queued start job for default target default.target.
Aug 13 07:17:21.919938 systemd[1558]: Created slice app.slice - User Application Slice.
Aug 13 07:17:21.920057 systemd[1558]: Reached target paths.target - Paths.
Aug 13 07:17:21.920087 systemd[1558]: Reached target timers.target - Timers.
Aug 13 07:17:21.927012 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 07:17:21.931257 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 07:17:21.953004 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 07:17:21.953540 systemd[1558]: Reached target sockets.target - Sockets.
Aug 13 07:17:21.953582 systemd[1558]: Reached target basic.target - Basic System.
Aug 13 07:17:21.953740 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 07:17:21.953905 systemd[1558]: Reached target default.target - Main User Target.
Aug 13 07:17:21.953975 systemd[1558]: Startup finished in 336ms.
Aug 13 07:17:21.974232 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 07:17:22.039408 instance-setup[1504]: INFO Running google_set_multiqueue.
Aug 13 07:17:22.060297 instance-setup[1504]: INFO Set channels for eth0 to 2.
Aug 13 07:17:22.065945 instance-setup[1504]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Aug 13 07:17:22.068589 instance-setup[1504]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Aug 13 07:17:22.068655 instance-setup[1504]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Aug 13 07:17:22.070535 instance-setup[1504]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Aug 13 07:17:22.071206 instance-setup[1504]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Aug 13 07:17:22.073257 instance-setup[1504]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Aug 13 07:17:22.074496 instance-setup[1504]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Aug 13 07:17:22.076235 instance-setup[1504]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Aug 13 07:17:22.085963 instance-setup[1504]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Aug 13 07:17:22.090575 instance-setup[1504]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Aug 13 07:17:22.092433 instance-setup[1504]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Aug 13 07:17:22.092490 instance-setup[1504]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Aug 13 07:17:22.125171 init.sh[1500]: + /usr/bin/google_metadata_script_runner --script-type startup
Aug 13 07:17:22.225608 systemd[1]: Started sshd@1-10.128.0.37:22-139.178.68.195:38280.service - OpenSSH per-connection server daemon (139.178.68.195:38280).
Aug 13 07:17:22.391938 startup-script[1601]: INFO Starting startup scripts.
Aug 13 07:17:22.400816 startup-script[1601]: INFO No startup scripts found in metadata.
Aug 13 07:17:22.400892 startup-script[1601]: INFO Finished running startup scripts.
Aug 13 07:17:22.431306 init.sh[1500]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Aug 13 07:17:22.431306 init.sh[1500]: + daemon_pids=()
Aug 13 07:17:22.431535 init.sh[1500]: + for d in accounts clock_skew network
Aug 13 07:17:22.432109 init.sh[1500]: + daemon_pids+=($!)
Aug 13 07:17:22.432109 init.sh[1500]: + for d in accounts clock_skew network
Aug 13 07:17:22.432261 init.sh[1607]: + /usr/bin/google_accounts_daemon
Aug 13 07:17:22.432624 init.sh[1500]: + daemon_pids+=($!)
Aug 13 07:17:22.432624 init.sh[1500]: + for d in accounts clock_skew network
Aug 13 07:17:22.432624 init.sh[1500]: + daemon_pids+=($!)
Aug 13 07:17:22.432624 init.sh[1500]: + NOTIFY_SOCKET=/run/systemd/notify
Aug 13 07:17:22.432624 init.sh[1500]: + /usr/bin/systemd-notify --ready
Aug 13 07:17:22.433332 init.sh[1609]: + /usr/bin/google_network_daemon
Aug 13 07:17:22.435114 init.sh[1608]: + /usr/bin/google_clock_skew_daemon
Aug 13 07:17:22.463691 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Aug 13 07:17:22.478651 init.sh[1500]: + wait -n 1607 1608 1609
Aug 13 07:17:22.602284 sshd[1603]: Accepted publickey for core from 139.178.68.195 port 38280 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:17:22.603726 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:22.622615 systemd-logind[1441]: New session 2 of user core.
Aug 13 07:17:22.625314 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 07:17:22.800733 ntpd[1430]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:25%2]:123
Aug 13 07:17:22.802036 ntpd[1430]: 13 Aug 07:17:22 ntpd[1430]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:25%2]:123
Aug 13 07:17:22.838365 sshd[1603]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:22.853142 systemd[1]: sshd@1-10.128.0.37:22-139.178.68.195:38280.service: Deactivated successfully.
Aug 13 07:17:22.858477 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 07:17:22.861248 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit.
Aug 13 07:17:22.863628 google-clock-skew[1608]: INFO Starting Google Clock Skew daemon.
Aug 13 07:17:22.866837 systemd-logind[1441]: Removed session 2.
Aug 13 07:17:22.883458 google-clock-skew[1608]: INFO Clock drift token has changed: 14674267442648900542.
Aug 13 07:17:22.905198 systemd[1]: Started sshd@2-10.128.0.37:22-139.178.68.195:38288.service - OpenSSH per-connection server daemon (139.178.68.195:38288).
Aug 13 07:17:22.915369 google-networking[1609]: INFO Starting Google Networking daemon.
Aug 13 07:17:23.000690 systemd-resolved[1319]: Clock change detected. Flushing caches.
Aug 13 07:17:23.004189 google-clock-skew[1608]: INFO Synced system time with hardware clock.
Aug 13 07:17:23.047608 groupadd[1626]: group added to /etc/group: name=google-sudoers, GID=1000
Aug 13 07:17:23.051821 groupadd[1626]: group added to /etc/gshadow: name=google-sudoers
Aug 13 07:17:23.109691 groupadd[1626]: new group: name=google-sudoers, GID=1000
Aug 13 07:17:23.143887 google-accounts[1607]: INFO Starting Google Accounts daemon.
Aug 13 07:17:23.157646 google-accounts[1607]: WARNING OS Login not installed.
Aug 13 07:17:23.159350 google-accounts[1607]: INFO Creating a new user account for 0.
Aug 13 07:17:23.164101 init.sh[1634]: useradd: invalid user name '0': use --badname to ignore
Aug 13 07:17:23.164686 google-accounts[1607]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Aug 13 07:17:23.282262 sshd[1623]: Accepted publickey for core from 139.178.68.195 port 38288 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:17:23.285063 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:23.294271 systemd-logind[1441]: New session 3 of user core.
Aug 13 07:17:23.298793 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:17:23.361679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:17:23.373607 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 07:17:23.378195 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:17:23.385411 systemd[1]: Startup finished in 1.075s (kernel) + 9.672s (initrd) + 9.951s (userspace) = 20.698s.
Aug 13 07:17:23.499821 sshd[1623]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:23.505874 systemd[1]: sshd@2-10.128.0.37:22-139.178.68.195:38288.service: Deactivated successfully.
Aug 13 07:17:23.509368 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 07:17:23.511751 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit.
Aug 13 07:17:23.514346 systemd-logind[1441]: Removed session 3.
Aug 13 07:17:24.309314 kubelet[1642]: E0813 07:17:24.309227 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:17:24.312146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:17:24.312435 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:17:24.313090 systemd[1]: kubelet.service: Consumed 1.317s CPU time.
Aug 13 07:17:33.564184 systemd[1]: Started sshd@3-10.128.0.37:22-139.178.68.195:58964.service - OpenSSH per-connection server daemon (139.178.68.195:58964).
Aug 13 07:17:33.851662 sshd[1657]: Accepted publickey for core from 139.178.68.195 port 58964 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:17:33.854151 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:33.861346 systemd-logind[1441]: New session 4 of user core.
Aug 13 07:17:33.868018 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:17:34.068938 sshd[1657]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:34.074275 systemd[1]: sshd@3-10.128.0.37:22-139.178.68.195:58964.service: Deactivated successfully.
Aug 13 07:17:34.077296 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:17:34.079464 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:17:34.081117 systemd-logind[1441]: Removed session 4.
Aug 13 07:17:34.126226 systemd[1]: Started sshd@4-10.128.0.37:22-139.178.68.195:58980.service - OpenSSH per-connection server daemon (139.178.68.195:58980).
Aug 13 07:17:34.362151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:17:34.373898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:17:34.422834 sshd[1664]: Accepted publickey for core from 139.178.68.195 port 58980 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:17:34.424447 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:34.431974 systemd-logind[1441]: New session 5 of user core.
Aug 13 07:17:34.439782 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:17:34.631981 sshd[1664]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:34.637567 systemd[1]: sshd@4-10.128.0.37:22-139.178.68.195:58980.service: Deactivated successfully.
Aug 13 07:17:34.640098 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:17:34.641063 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:17:34.642492 systemd-logind[1441]: Removed session 5.
Aug 13 07:17:34.692006 systemd[1]: Started sshd@5-10.128.0.37:22-139.178.68.195:58996.service - OpenSSH per-connection server daemon (139.178.68.195:58996).
Aug 13 07:17:34.738592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:17:34.745333 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:17:34.802850 kubelet[1681]: E0813 07:17:34.802728 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:17:34.808575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:17:34.808827 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:17:34.991760 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 58996 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:17:34.993708 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:35.000167 systemd-logind[1441]: New session 6 of user core.
Aug 13 07:17:35.006786 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:17:35.208041 sshd[1674]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:35.213560 systemd[1]: sshd@5-10.128.0.37:22-139.178.68.195:58996.service: Deactivated successfully.
Aug 13 07:17:35.215887 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:17:35.216835 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:17:35.218196 systemd-logind[1441]: Removed session 6.
Aug 13 07:17:35.262981 systemd[1]: Started sshd@6-10.128.0.37:22-139.178.68.195:59004.service - OpenSSH per-connection server daemon (139.178.68.195:59004).
Aug 13 07:17:35.553389 sshd[1694]: Accepted publickey for core from 139.178.68.195 port 59004 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:17:35.555309 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:35.561990 systemd-logind[1441]: New session 7 of user core.
Aug 13 07:17:35.571829 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:17:35.603214 systemd[1]: Started sshd@7-10.128.0.37:22-157.122.198.36:38758.service - OpenSSH per-connection server daemon (157.122.198.36:38758).
Aug 13 07:17:35.746028 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:17:35.746573 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:17:35.763740 sudo[1699]: pam_unix(sudo:session): session closed for user root
Aug 13 07:17:35.786678 sshd[1698]: Connection closed by 157.122.198.36 port 38758
Aug 13 07:17:35.787653 systemd[1]: sshd@7-10.128.0.37:22-157.122.198.36:38758.service: Deactivated successfully.
Aug 13 07:17:35.806578 sshd[1694]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:35.812293 systemd[1]: sshd@6-10.128.0.37:22-139.178.68.195:59004.service: Deactivated successfully.
Aug 13 07:17:35.814413 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 07:17:35.815760 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
Aug 13 07:17:35.817336 systemd-logind[1441]: Removed session 7.
Aug 13 07:17:35.861973 systemd[1]: Started sshd@8-10.128.0.37:22-139.178.68.195:59014.service - OpenSSH per-connection server daemon (139.178.68.195:59014).
Aug 13 07:17:36.164150 sshd[1706]: Accepted publickey for core from 139.178.68.195 port 59014 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:17:36.165656 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:36.171922 systemd-logind[1441]: New session 8 of user core.
Aug 13 07:17:36.179792 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 07:17:36.343003 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:17:36.343541 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:17:36.348572 sudo[1710]: pam_unix(sudo:session): session closed for user root
Aug 13 07:17:36.362189 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:17:36.362714 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:17:36.379937 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:17:36.383279 auditctl[1713]: No rules
Aug 13 07:17:36.383829 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:17:36.384102 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:17:36.391113 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:17:36.424825 augenrules[1732]: No rules
Aug 13 07:17:36.426149 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:17:36.428320 sudo[1709]: pam_unix(sudo:session): session closed for user root
Aug 13 07:17:36.471985 sshd[1706]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:36.477340 systemd[1]: sshd@8-10.128.0.37:22-139.178.68.195:59014.service: Deactivated successfully.
Aug 13 07:17:36.479632 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 07:17:36.480559 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
Aug 13 07:17:36.482055 systemd-logind[1441]: Removed session 8.
Aug 13 07:17:36.529939 systemd[1]: Started sshd@9-10.128.0.37:22-139.178.68.195:59020.service - OpenSSH per-connection server daemon (139.178.68.195:59020).
Aug 13 07:17:36.817210 sshd[1740]: Accepted publickey for core from 139.178.68.195 port 59020 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:17:36.818969 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:36.825349 systemd-logind[1441]: New session 9 of user core.
Aug 13 07:17:36.830760 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 07:17:36.998771 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:17:36.999298 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:17:37.443909 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:17:37.456221 (dockerd)[1760]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:17:37.920778 dockerd[1760]: time="2025-08-13T07:17:37.920691115Z" level=info msg="Starting up"
Aug 13 07:17:38.074907 dockerd[1760]: time="2025-08-13T07:17:38.074833733Z" level=info msg="Loading containers: start."
Aug 13 07:17:38.233568 kernel: Initializing XFRM netlink socket
Aug 13 07:17:38.352033 systemd-networkd[1377]: docker0: Link UP
Aug 13 07:17:38.376670 dockerd[1760]: time="2025-08-13T07:17:38.376611751Z" level=info msg="Loading containers: done."
Aug 13 07:17:38.403977 dockerd[1760]: time="2025-08-13T07:17:38.403228141Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:17:38.403977 dockerd[1760]: time="2025-08-13T07:17:38.403399572Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:17:38.403977 dockerd[1760]: time="2025-08-13T07:17:38.403588117Z" level=info msg="Daemon has completed initialization"
Aug 13 07:17:38.403764 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2538626085-merged.mount: Deactivated successfully.
Aug 13 07:17:38.447852 dockerd[1760]: time="2025-08-13T07:17:38.447757198Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:17:38.448198 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:17:39.395470 containerd[1462]: time="2025-08-13T07:17:39.395414481Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 13 07:17:39.953020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054496472.mount: Deactivated successfully.
Aug 13 07:17:41.892868 containerd[1462]: time="2025-08-13T07:17:41.892790654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:41.894431 containerd[1462]: time="2025-08-13T07:17:41.894369658Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28806622"
Aug 13 07:17:41.895612 containerd[1462]: time="2025-08-13T07:17:41.895537700Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:41.904162 containerd[1462]: time="2025-08-13T07:17:41.903708912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:41.907322 containerd[1462]: time="2025-08-13T07:17:41.907250976Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 2.511781344s"
Aug 13 07:17:41.907322 containerd[1462]: time="2025-08-13T07:17:41.907316573Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\""
Aug 13 07:17:41.908164 containerd[1462]: time="2025-08-13T07:17:41.908086374Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 13 07:17:43.345025 containerd[1462]: time="2025-08-13T07:17:43.344950330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:43.346778 containerd[1462]: time="2025-08-13T07:17:43.346706873Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24785570"
Aug 13 07:17:43.348077 containerd[1462]: time="2025-08-13T07:17:43.347997908Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:43.351970 containerd[1462]: time="2025-08-13T07:17:43.351886805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:43.353505 containerd[1462]: time="2025-08-13T07:17:43.353279729Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.445138207s"
Aug 13 07:17:43.353505 containerd[1462]: time="2025-08-13T07:17:43.353332581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\""
Aug 13 07:17:43.354544 containerd[1462]: time="2025-08-13T07:17:43.354118764Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 07:17:45.040218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 07:17:45.049921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:17:45.386955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:17:45.397372 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:17:45.487466 kubelet[1969]: E0813 07:17:45.487017 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:17:45.492455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:17:45.492769 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:17:45.645801 containerd[1462]: time="2025-08-13T07:17:45.645629132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:45.647978 containerd[1462]: time="2025-08-13T07:17:45.647888064Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19178837"
Aug 13 07:17:45.649799 containerd[1462]: time="2025-08-13T07:17:45.649749335Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:45.656466 containerd[1462]: time="2025-08-13T07:17:45.654468347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:45.656466 containerd[1462]: time="2025-08-13T07:17:45.656279213Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 2.30197145s"
Aug 13 07:17:45.656466 containerd[1462]: time="2025-08-13T07:17:45.656331189Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\""
Aug 13 07:17:45.657422 containerd[1462]: time="2025-08-13T07:17:45.657373087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 13 07:17:46.855735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040182780.mount: Deactivated successfully.
Aug 13 07:17:47.098026 systemd[1]: Started sshd@10-10.128.0.37:22-183.253.125.205:40579.service - OpenSSH per-connection server daemon (183.253.125.205:40579).
Aug 13 07:17:47.376549 sshd[1985]: Connection closed by 183.253.125.205 port 40579
Aug 13 07:17:47.378618 systemd[1]: sshd@10-10.128.0.37:22-183.253.125.205:40579.service: Deactivated successfully.
Aug 13 07:17:47.531409 containerd[1462]: time="2025-08-13T07:17:47.531327964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:47.532837 containerd[1462]: time="2025-08-13T07:17:47.532766148Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30897275"
Aug 13 07:17:47.534411 containerd[1462]: time="2025-08-13T07:17:47.534339949Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:47.537553 containerd[1462]: time="2025-08-13T07:17:47.537477212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:47.538666 containerd[1462]: time="2025-08-13T07:17:47.538455370Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.880842987s"
Aug 13 07:17:47.538666 containerd[1462]: time="2025-08-13T07:17:47.538505292Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\""
Aug 13 07:17:47.539561 containerd[1462]: time="2025-08-13T07:17:47.539319760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 07:17:47.948600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274940828.mount: Deactivated successfully.
Aug 13 07:17:49.435576 containerd[1462]: time="2025-08-13T07:17:49.435478738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:49.437221 containerd[1462]: time="2025-08-13T07:17:49.437148718Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Aug 13 07:17:49.438294 containerd[1462]: time="2025-08-13T07:17:49.438205348Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:49.444970 containerd[1462]: time="2025-08-13T07:17:49.444843184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:49.446697 containerd[1462]: time="2025-08-13T07:17:49.446465978Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.90709979s"
Aug 13 07:17:49.446697 containerd[1462]: time="2025-08-13T07:17:49.446542822Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 07:17:49.447722 containerd[1462]: time="2025-08-13T07:17:49.447454484Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 07:17:49.903664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695080015.mount: Deactivated successfully.
Aug 13 07:17:49.997836 containerd[1462]: time="2025-08-13T07:17:49.997699262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:49.999230 containerd[1462]: time="2025-08-13T07:17:49.999148168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Aug 13 07:17:50.001138 containerd[1462]: time="2025-08-13T07:17:50.000990940Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:50.004984 containerd[1462]: time="2025-08-13T07:17:50.004874690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:50.007133 containerd[1462]: time="2025-08-13T07:17:50.006328335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 558.827366ms"
Aug 13 07:17:50.007133 containerd[1462]: time="2025-08-13T07:17:50.006397552Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 07:17:50.007420 containerd[1462]: time="2025-08-13T07:17:50.007385043Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 13 07:17:50.469633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366524127.mount: Deactivated successfully.
Aug 13 07:17:51.035147 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 13 07:17:51.449020 systemd[1]: Started sshd@11-10.128.0.37:22-218.25.233.22:50719.service - OpenSSH per-connection server daemon (218.25.233.22:50719).
Aug 13 07:17:52.488382 sshd[2101]: Connection closed by 218.25.233.22 port 50719
Aug 13 07:17:52.490962 systemd[1]: sshd@11-10.128.0.37:22-218.25.233.22:50719.service: Deactivated successfully.
Aug 13 07:17:52.679064 containerd[1462]: time="2025-08-13T07:17:52.678979567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:52.681139 containerd[1462]: time="2025-08-13T07:17:52.681071995Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57557924"
Aug 13 07:17:52.682283 containerd[1462]: time="2025-08-13T07:17:52.682193068Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:52.687692 containerd[1462]: time="2025-08-13T07:17:52.687597868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:52.690500 containerd[1462]: time="2025-08-13T07:17:52.689720685Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.68228299s"
Aug 13 07:17:52.690500 containerd[1462]: time="2025-08-13T07:17:52.689775402Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Aug 13 07:17:55.538619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 07:17:55.547609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:17:55.916186 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 07:17:55.916500 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 07:17:55.917009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:17:55.925081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:17:55.983497 systemd[1]: Reloading requested from client PID 2138 ('systemctl') (unit session-9.scope)...
Aug 13 07:17:55.983544 systemd[1]: Reloading...
Aug 13 07:17:56.169567 zram_generator::config[2181]: No configuration found.
Aug 13 07:17:56.344921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:17:56.463576 systemd[1]: Reloading finished in 479 ms.
Aug 13 07:17:56.537290 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 07:17:56.537455 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 07:17:56.537924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:17:56.548620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:17:57.533846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:17:57.547339 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:17:57.616468 kubelet[2226]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:17:57.616468 kubelet[2226]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:17:57.616468 kubelet[2226]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:17:57.617228 kubelet[2226]: I0813 07:17:57.616597 2226 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:17:57.945542 kubelet[2226]: I0813 07:17:57.945462 2226 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 07:17:57.945542 kubelet[2226]: I0813 07:17:57.945501 2226 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:17:57.946062 kubelet[2226]: I0813 07:17:57.946020 2226 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 07:17:57.991394 kubelet[2226]: E0813 07:17:57.991304 2226 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.37:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:17:57.993633 kubelet[2226]: I0813 07:17:57.993003 2226 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:17:58.008616 kubelet[2226]: E0813 07:17:58.008566 2226 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:17:58.008616 kubelet[2226]: I0813 07:17:58.008615 2226 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:17:58.012529 kubelet[2226]: I0813 07:17:58.012495 2226 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:17:58.012890 kubelet[2226]: I0813 07:17:58.012834 2226 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:17:58.013152 kubelet[2226]: I0813 07:17:58.012876 2226 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 07:17:58.014880 kubelet[2226]: I0813 07:17:58.014842 2226 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:17:58.014880 kubelet[2226]: I0813 07:17:58.014877 2226 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 07:17:58.015087 kubelet[2226]: I0813 07:17:58.015053 2226 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:17:58.020430 kubelet[2226]: I0813 07:17:58.020306 2226 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 07:17:58.020430 kubelet[2226]: I0813 07:17:58.020353 2226 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:17:58.020430 kubelet[2226]: I0813 07:17:58.020388 2226 kubelet.go:352] "Adding apiserver pod source"
Aug 13 07:17:58.020430 kubelet[2226]: I0813 07:17:58.020405 2226 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:17:58.030611 kubelet[2226]: W0813 07:17:58.029572 2226 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.37:6443: connect: connection refused
Aug 13 07:17:58.030611 kubelet[2226]: E0813 07:17:58.029657 2226 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.37:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:17:58.030611 kubelet[2226]: W0813 07:17:58.030201 2226 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.37:6443: connect: connection refused
Aug 13 07:17:58.030611 kubelet[2226]: E0813 07:17:58.030262 2226 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.37:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:17:58.030977 kubelet[2226]: I0813 07:17:58.030892 2226 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:17:58.031431 kubelet[2226]: I0813 07:17:58.031382 2226 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 07:17:58.031546 kubelet[2226]: W0813 07:17:58.031472 2226 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 07:17:58.035146 kubelet[2226]: I0813 07:17:58.035099 2226 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 07:17:58.035251 kubelet[2226]: I0813 07:17:58.035153 2226 server.go:1287] "Started kubelet"
Aug 13 07:17:58.037533 kubelet[2226]: I0813 07:17:58.037337 2226 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:17:58.038657 kubelet[2226]: I0813 07:17:58.038612 2226 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 07:17:58.043481 kubelet[2226]: I0813 07:17:58.043429 2226 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:17:58.047551 kubelet[2226]: I0813 07:17:58.046790 2226 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:17:58.047551 kubelet[2226]: I0813 07:17:58.047124 2226 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:17:58.050700 kubelet[2226]: E0813 07:17:58.047531 2226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.37:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal.185b42643a69e770 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,UID:ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 07:17:58.035122032 +0000 UTC m=+0.480973315,LastTimestamp:2025-08-13 07:17:58.035122032 +0000 UTC m=+0.480973315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,}"
Aug 13 07:17:58.053439 kubelet[2226]: I0813 07:17:58.053412 2226 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:17:58.054259 kubelet[2226]: E0813 07:17:58.054220 2226 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found"
Aug 13 07:17:58.056008 kubelet[2226]: I0813 07:17:58.055966 2226 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 07:17:58.057692 kubelet[2226]: E0813 07:17:58.057648 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.37:6443: connect: connection refused" interval="200ms"
Aug 13 07:17:58.058565 kubelet[2226]: I0813 07:17:58.058540 2226 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 07:17:58.058665 kubelet[2226]: I0813 07:17:58.058615 2226 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:17:58.060471 kubelet[2226]: W0813 07:17:58.060110 2226 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.37:6443: connect: connection refused
Aug 13 07:17:58.060471 kubelet[2226]: E0813 07:17:58.060225 2226 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.37:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:17:58.060667 kubelet[2226]: I0813 07:17:58.060540 2226 factory.go:221] Registration of the systemd container factory successfully
Aug 13 07:17:58.060725 kubelet[2226]: I0813 07:17:58.060668 2226 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:17:58.063542 kubelet[2226]: I0813 07:17:58.063227 2226 factory.go:221] Registration of the containerd container factory successfully
Aug 13 07:17:58.083576 kubelet[2226]: I0813 07:17:58.083490 2226 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:17:58.090459 kubelet[2226]: I0813 07:17:58.090405 2226 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:17:58.090459 kubelet[2226]: I0813 07:17:58.090440 2226 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 07:17:58.090459 kubelet[2226]: I0813 07:17:58.090469 2226 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 07:17:58.090719 kubelet[2226]: I0813 07:17:58.090482 2226 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 07:17:58.090719 kubelet[2226]: E0813 07:17:58.090568 2226 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 07:17:58.095796 kubelet[2226]: W0813 07:17:58.095757 2226 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.37:6443: connect: connection refused
Aug 13 07:17:58.096218 kubelet[2226]: E0813 07:17:58.096000 2226 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.37:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:17:58.097082 kubelet[2226]: I0813 07:17:58.096488 2226 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 07:17:58.097583 kubelet[2226]: I0813 07:17:58.097202 2226 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 07:17:58.097583 kubelet[2226]: I0813 07:17:58.097234 2226 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:17:58.099622 kubelet[2226]: I0813 07:17:58.099589 2226 policy_none.go:49] "None policy: Start"
Aug 13 07:17:58.099715 kubelet[2226]: I0813 07:17:58.099626 2226 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 07:17:58.099715 kubelet[2226]: I0813 07:17:58.099647 2226 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 07:17:58.106874 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 13 07:17:58.125568 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 13 07:17:58.129890 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 13 07:17:58.140891 kubelet[2226]: I0813 07:17:58.140853 2226 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 07:17:58.141394 kubelet[2226]: I0813 07:17:58.141370 2226 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 07:17:58.141591 kubelet[2226]: I0813 07:17:58.141544 2226 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 07:17:58.142036 kubelet[2226]: I0813 07:17:58.142014 2226 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 07:17:58.144437 kubelet[2226]: E0813 07:17:58.144410 2226 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 07:17:58.144588 kubelet[2226]: E0813 07:17:58.144471 2226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found"
Aug 13 07:17:58.213129 systemd[1]: Created slice kubepods-burstable-podf67b318e8046607042f09129c2564267.slice - libcontainer container kubepods-burstable-podf67b318e8046607042f09129c2564267.slice.
Aug 13 07:17:58.224975 kubelet[2226]: E0813 07:17:58.224891 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.230577 systemd[1]: Created slice kubepods-burstable-pod256e0d08f6eabdc2f6a3277604e6378b.slice - libcontainer container kubepods-burstable-pod256e0d08f6eabdc2f6a3277604e6378b.slice.
Aug 13 07:17:58.234083 kubelet[2226]: E0813 07:17:58.234043 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.237066 systemd[1]: Created slice kubepods-burstable-pod7aaf3489e5db9f851fb3558b54f60fde.slice - libcontainer container kubepods-burstable-pod7aaf3489e5db9f851fb3558b54f60fde.slice.
Aug 13 07:17:58.240214 kubelet[2226]: E0813 07:17:58.240168 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.248689 kubelet[2226]: I0813 07:17:58.248624 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.249332 kubelet[2226]: E0813 07:17:58.249244 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.37:6443/api/v1/nodes\": dial tcp 10.128.0.37:6443: connect: connection refused" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.259385 kubelet[2226]: E0813 07:17:58.259285 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.37:6443: connect: connection refused" interval="400ms"
Aug 13 07:17:58.259616 kubelet[2226]: I0813 07:17:58.259430 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f67b318e8046607042f09129c2564267-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"f67b318e8046607042f09129c2564267\") " pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.259616 kubelet[2226]: I0813 07:17:58.259496 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.259616 kubelet[2226]: I0813 07:17:58.259583 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.259815 kubelet[2226]: I0813 07:17:58.259618 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.259815 kubelet[2226]: I0813 07:17:58.259651 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.259815 kubelet[2226]: I0813 07:17:58.259686 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f67b318e8046607042f09129c2564267-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"f67b318e8046607042f09129c2564267\") " pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.259815 kubelet[2226]: I0813 07:17:58.259717 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f67b318e8046607042f09129c2564267-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"f67b318e8046607042f09129c2564267\") " pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.260018 kubelet[2226]: I0813 07:17:58.259765 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.260018 kubelet[2226]: I0813 07:17:58.259801 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7aaf3489e5db9f851fb3558b54f60fde-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"7aaf3489e5db9f851fb3558b54f60fde\") " pod="kube-system/kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.455144 kubelet[2226]: I0813 07:17:58.455101 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.455591 kubelet[2226]: E0813 07:17:58.455550 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.37:6443/api/v1/nodes\": dial tcp 10.128.0.37:6443: connect: connection refused" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.527036 containerd[1462]: time="2025-08-13T07:17:58.526851849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,Uid:f67b318e8046607042f09129c2564267,Namespace:kube-system,Attempt:0,}"
Aug 13 07:17:58.536258 containerd[1462]: time="2025-08-13T07:17:58.536183218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,Uid:256e0d08f6eabdc2f6a3277604e6378b,Namespace:kube-system,Attempt:0,}"
Aug 13 07:17:58.542459 containerd[1462]: time="2025-08-13T07:17:58.542050634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,Uid:7aaf3489e5db9f851fb3558b54f60fde,Namespace:kube-system,Attempt:0,}"
Aug 13 07:17:58.660287 kubelet[2226]: E0813 07:17:58.660230 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.37:6443: connect: connection refused" interval="800ms"
Aug 13 07:17:58.862747 kubelet[2226]: I0813 07:17:58.862697 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.863213 kubelet[2226]: E0813 07:17:58.863161 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.37:6443/api/v1/nodes\": dial tcp 10.128.0.37:6443: connect: connection refused" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:17:58.905551 kubelet[2226]: W0813 07:17:58.904953 2226 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.37:6443: connect: connection refused
Aug 13 07:17:58.907546 kubelet[2226]: E0813 07:17:58.906317 2226 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.37:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:17:58.913863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105796281.mount: Deactivated successfully.
Aug 13 07:17:58.923600 containerd[1462]: time="2025-08-13T07:17:58.923484652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:17:58.925058 containerd[1462]: time="2025-08-13T07:17:58.924997142Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:17:58.926395 containerd[1462]: time="2025-08-13T07:17:58.926330712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Aug 13 07:17:58.927795 containerd[1462]: time="2025-08-13T07:17:58.927733528Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:17:58.929622 containerd[1462]: time="2025-08-13T07:17:58.929558462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 07:17:58.929741 containerd[1462]: time="2025-08-13T07:17:58.929670959Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:17:58.930801 containerd[1462]: time="2025-08-13T07:17:58.930677529Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 07:17:58.933955 containerd[1462]: time="2025-08-13T07:17:58.933913300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:17:58.937161 containerd[1462]: time="2025-08-13T07:17:58.936468799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 400.184937ms"
Aug 13 07:17:58.939046 containerd[1462]: time="2025-08-13T07:17:58.938991104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 396.848679ms"
Aug 13 07:17:58.939878 containerd[1462]: time="2025-08-13T07:17:58.939828755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 412.874786ms"
Aug 13 07:17:59.133680 containerd[1462]: time="2025-08-13T07:17:59.132924306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:17:59.133680 containerd[1462]: time="2025-08-13T07:17:59.133012979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:17:59.133680 containerd[1462]: time="2025-08-13T07:17:59.133042513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:59.138546 containerd[1462]: time="2025-08-13T07:17:59.136325516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:59.138887 containerd[1462]: time="2025-08-13T07:17:59.138411577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:17:59.138887 containerd[1462]: time="2025-08-13T07:17:59.138495750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:17:59.138887 containerd[1462]: time="2025-08-13T07:17:59.138547407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:59.138887 containerd[1462]: time="2025-08-13T07:17:59.138680755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:59.145305 containerd[1462]: time="2025-08-13T07:17:59.144921935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:17:59.145305 containerd[1462]: time="2025-08-13T07:17:59.145004671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:17:59.145305 containerd[1462]: time="2025-08-13T07:17:59.145032303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:59.145305 containerd[1462]: time="2025-08-13T07:17:59.145175967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:59.197769 systemd[1]: Started cri-containerd-33e7ab69487a8e97c35c7edfb0ea129188e6584f6587301edde5605051023eb4.scope - libcontainer container 33e7ab69487a8e97c35c7edfb0ea129188e6584f6587301edde5605051023eb4.
Aug 13 07:17:59.200376 systemd[1]: Started cri-containerd-bd9c152a33a6321c11f4b7eaab7b49eae3a3bbb12ecf768101e0301be44f91b3.scope - libcontainer container bd9c152a33a6321c11f4b7eaab7b49eae3a3bbb12ecf768101e0301be44f91b3.
Aug 13 07:17:59.202800 systemd[1]: Started cri-containerd-eb6d9c2643375d932c775f397abc567b57e485cd3450f314f021dad2c290642d.scope - libcontainer container eb6d9c2643375d932c775f397abc567b57e485cd3450f314f021dad2c290642d.
Aug 13 07:17:59.307604 containerd[1462]: time="2025-08-13T07:17:59.306042657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,Uid:f67b318e8046607042f09129c2564267,Namespace:kube-system,Attempt:0,} returns sandbox id \"33e7ab69487a8e97c35c7edfb0ea129188e6584f6587301edde5605051023eb4\""
Aug 13 07:17:59.309009 kubelet[2226]: E0813 07:17:59.308964 2226 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-21291"
Aug 13 07:17:59.312137 containerd[1462]: time="2025-08-13T07:17:59.311897897Z" level=info msg="CreateContainer within sandbox \"33e7ab69487a8e97c35c7edfb0ea129188e6584f6587301edde5605051023eb4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 07:17:59.322302 containerd[1462]: time="2025-08-13T07:17:59.321936203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,Uid:256e0d08f6eabdc2f6a3277604e6378b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd9c152a33a6321c11f4b7eaab7b49eae3a3bbb12ecf768101e0301be44f91b3\""
Aug 13 07:17:59.324454 kubelet[2226]: E0813 07:17:59.324105 2226 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flat"
Aug 13 07:17:59.326365 containerd[1462]: time="2025-08-13T07:17:59.326327000Z" level=info msg="CreateContainer within sandbox \"bd9c152a33a6321c11f4b7eaab7b49eae3a3bbb12ecf768101e0301be44f91b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 07:17:59.342884 containerd[1462]: time="2025-08-13T07:17:59.342788168Z" level=info msg="CreateContainer within sandbox \"33e7ab69487a8e97c35c7edfb0ea129188e6584f6587301edde5605051023eb4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6f668e65562c9ab2a861bdbb3a2ccadf2f246ad280bcb7b25b68ab8aea0bb0ee\""
Aug 13 07:17:59.344015 containerd[1462]: time="2025-08-13T07:17:59.343759239Z" level=info msg="StartContainer for \"6f668e65562c9ab2a861bdbb3a2ccadf2f246ad280bcb7b25b68ab8aea0bb0ee\""
Aug 13 07:17:59.347911 containerd[1462]: time="2025-08-13T07:17:59.347836535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal,Uid:7aaf3489e5db9f851fb3558b54f60fde,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb6d9c2643375d932c775f397abc567b57e485cd3450f314f021dad2c290642d\""
Aug 13 07:17:59.351357 kubelet[2226]: E0813 07:17:59.351231 2226 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-21291"
Aug 13 07:17:59.353780 containerd[1462]: time="2025-08-13T07:17:59.353736357Z" level=info msg="CreateContainer within sandbox \"eb6d9c2643375d932c775f397abc567b57e485cd3450f314f021dad2c290642d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 07:17:59.361829 containerd[1462]: time="2025-08-13T07:17:59.361768927Z" level=info msg="CreateContainer within sandbox \"bd9c152a33a6321c11f4b7eaab7b49eae3a3bbb12ecf768101e0301be44f91b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ffbb5c07a80aefc02e096b2d19587a820bdc5d328256a9ac8157b3b32ef32f1\""
Aug 13 07:17:59.364212 containerd[1462]: time="2025-08-13T07:17:59.364154171Z" level=info msg="StartContainer for \"0ffbb5c07a80aefc02e096b2d19587a820bdc5d328256a9ac8157b3b32ef32f1\""
Aug 13 07:17:59.379639 containerd[1462]: time="2025-08-13T07:17:59.378580860Z" level=info msg="CreateContainer within sandbox \"eb6d9c2643375d932c775f397abc567b57e485cd3450f314f021dad2c290642d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"040c3a5ef40a408703037ebbfcc5505f35f28623ab34c442bda056191711844f\""
Aug 13 07:17:59.380887 containerd[1462]: time="2025-08-13T07:17:59.380854780Z" level=info msg="StartContainer for \"040c3a5ef40a408703037ebbfcc5505f35f28623ab34c442bda056191711844f\""
Aug 13 07:17:59.403786 systemd[1]: Started cri-containerd-6f668e65562c9ab2a861bdbb3a2ccadf2f246ad280bcb7b25b68ab8aea0bb0ee.scope - libcontainer container 6f668e65562c9ab2a861bdbb3a2ccadf2f246ad280bcb7b25b68ab8aea0bb0ee.
Aug 13 07:17:59.428942 systemd[1]: Started cri-containerd-0ffbb5c07a80aefc02e096b2d19587a820bdc5d328256a9ac8157b3b32ef32f1.scope - libcontainer container 0ffbb5c07a80aefc02e096b2d19587a820bdc5d328256a9ac8157b3b32ef32f1.
Aug 13 07:17:59.460748 systemd[1]: Started cri-containerd-040c3a5ef40a408703037ebbfcc5505f35f28623ab34c442bda056191711844f.scope - libcontainer container 040c3a5ef40a408703037ebbfcc5505f35f28623ab34c442bda056191711844f.
Aug 13 07:17:59.462090 kubelet[2226]: E0813 07:17:59.462012 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.37:6443: connect: connection refused" interval="1.6s"
Aug 13 07:17:59.464530 kubelet[2226]: W0813 07:17:59.464473 2226 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.37:6443: connect: connection refused
Aug 13 07:17:59.464651 kubelet[2226]: E0813 07:17:59.464548 2226 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.37:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:17:59.498102 kubelet[2226]: W0813 07:17:59.497436 2226 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.37:6443: connect: connection refused
Aug 13 07:17:59.498102 kubelet[2226]: E0813 07:17:59.497770 2226 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.37:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:17:59.535440 containerd[1462]: time="2025-08-13T07:17:59.535300908Z" level=info msg="StartContainer for \"6f668e65562c9ab2a861bdbb3a2ccadf2f246ad280bcb7b25b68ab8aea0bb0ee\" returns successfully"
Aug 13 07:17:59.570067 containerd[1462]: time="2025-08-13T07:17:59.566345570Z" level=info msg="StartContainer for \"0ffbb5c07a80aefc02e096b2d19587a820bdc5d328256a9ac8157b3b32ef32f1\" returns successfully"
Aug 13 07:17:59.590259 kubelet[2226]: W0813 07:17:59.590176 2226 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.37:6443: connect: connection refused
Aug 13 07:17:59.590431 kubelet[2226]: E0813 07:17:59.590276 2226 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.37:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:17:59.612312 containerd[1462]: time="2025-08-13T07:17:59.612260408Z" level=info msg="StartContainer for \"040c3a5ef40a408703037ebbfcc5505f35f28623ab34c442bda056191711844f\" returns successfully"
Aug 13 07:17:59.672638 kubelet[2226]: I0813 07:17:59.671573 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:00.129542 kubelet[2226]: E0813 07:18:00.128222 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:00.129542 kubelet[2226]: E0813 07:18:00.128817 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:00.131271 kubelet[2226]: E0813 07:18:00.131244 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:01.135669 kubelet[2226]: E0813 07:18:01.133351 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:01.137541 kubelet[2226]: E0813 07:18:01.136834 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:01.137541 kubelet[2226]: E0813 07:18:01.137319 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:02.138782 kubelet[2226]: E0813 07:18:02.138737 2226 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:02.744740 kubelet[2226]: E0813 07:18:02.744688 2226 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:02.817422 kubelet[2226]: I0813 07:18:02.817365 2226 
kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:02.855097 kubelet[2226]: I0813 07:18:02.855036 2226 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:02.920672 kubelet[2226]: E0813 07:18:02.920620 2226 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:02.920672 kubelet[2226]: I0813 07:18:02.920669 2226 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:02.940027 kubelet[2226]: E0813 07:18:02.939972 2226 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:02.940223 kubelet[2226]: I0813 07:18:02.940044 2226 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:02.947468 kubelet[2226]: E0813 07:18:02.947391 2226 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:03.034560 kubelet[2226]: I0813 07:18:03.033844 2226 apiserver.go:52] "Watching apiserver" Aug 13 
07:18:03.058832 kubelet[2226]: I0813 07:18:03.058777 2226 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:18:05.032241 systemd[1]: Reloading requested from client PID 2497 ('systemctl') (unit session-9.scope)... Aug 13 07:18:05.032322 systemd[1]: Reloading... Aug 13 07:18:05.160605 zram_generator::config[2533]: No configuration found. Aug 13 07:18:05.329809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:18:05.452674 systemd[1]: Reloading finished in 419 ms. Aug 13 07:18:05.507197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:18:05.524563 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:18:05.524926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:05.525015 systemd[1]: kubelet.service: Consumed 1.026s CPU time, 133.0M memory peak, 0B memory swap peak. Aug 13 07:18:05.530034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:18:05.688483 update_engine[1445]: I20250813 07:18:05.687288 1445 update_attempter.cc:509] Updating boot flags... Aug 13 07:18:05.777552 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2589) Aug 13 07:18:05.907989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:05.943222 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2588) Aug 13 07:18:05.950252 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:18:06.101295 kubelet[2602]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:18:06.101770 kubelet[2602]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:18:06.101770 kubelet[2602]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:18:06.102000 kubelet[2602]: I0813 07:18:06.101929 2602 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:18:06.112284 kubelet[2602]: I0813 07:18:06.111661 2602 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:18:06.112284 kubelet[2602]: I0813 07:18:06.111696 2602 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:18:06.112284 kubelet[2602]: I0813 07:18:06.112073 2602 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:18:06.119302 kubelet[2602]: I0813 07:18:06.119264 2602 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:18:06.123098 kubelet[2602]: I0813 07:18:06.123059 2602 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:18:06.129156 kubelet[2602]: E0813 07:18:06.128125 2602 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:18:06.129156 kubelet[2602]: I0813 07:18:06.128162 2602 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Aug 13 07:18:06.132110 kubelet[2602]: I0813 07:18:06.132062 2602 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:18:06.132461 kubelet[2602]: I0813 07:18:06.132389 2602 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:18:06.132734 kubelet[2602]: I0813 07:18:06.132447 2602 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"
CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:18:06.132912 kubelet[2602]: I0813 07:18:06.132740 2602 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:18:06.132912 kubelet[2602]: I0813 07:18:06.132759 2602 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:18:06.132912 kubelet[2602]: I0813 07:18:06.132831 2602 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:18:06.133097 kubelet[2602]: I0813 07:18:06.133052 2602 kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:18:06.133097 kubelet[2602]: I0813 07:18:06.133085 2602 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:18:06.133934 kubelet[2602]: I0813 07:18:06.133113 2602 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:18:06.133934 kubelet[2602]: I0813 07:18:06.133130 2602 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:18:06.134611 kubelet[2602]: I0813 07:18:06.134591 2602 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:18:06.135388 kubelet[2602]: I0813 07:18:06.135365 2602 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:18:06.136159 kubelet[2602]: I0813 07:18:06.136136 2602 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:18:06.138537 kubelet[2602]: I0813 07:18:06.136318 2602 server.go:1287] "Started kubelet" Aug 13 07:18:06.143494 kubelet[2602]: I0813 07:18:06.143464 2602 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:18:06.148551 kubelet[2602]: I0813 07:18:06.148478 2602 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:18:06.151924 kubelet[2602]: I0813 07:18:06.151889 2602 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:18:06.153712 kubelet[2602]: 
I0813 07:18:06.153682 2602 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:18:06.155836 kubelet[2602]: I0813 07:18:06.155752 2602 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:18:06.156962 kubelet[2602]: I0813 07:18:06.156103 2602 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:18:06.158747 kubelet[2602]: I0813 07:18:06.158697 2602 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:18:06.158900 kubelet[2602]: E0813 07:18:06.158874 2602 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" not found" Aug 13 07:18:06.163151 kubelet[2602]: I0813 07:18:06.159837 2602 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:18:06.163151 kubelet[2602]: I0813 07:18:06.160019 2602 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:18:06.188690 kubelet[2602]: I0813 07:18:06.188603 2602 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:18:06.188862 kubelet[2602]: I0813 07:18:06.188771 2602 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:18:06.205793 kubelet[2602]: I0813 07:18:06.202666 2602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:18:06.205793 kubelet[2602]: E0813 07:18:06.203954 2602 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:18:06.208009 kubelet[2602]: I0813 07:18:06.207970 2602 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:18:06.210797 kubelet[2602]: I0813 07:18:06.210109 2602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:18:06.210797 kubelet[2602]: I0813 07:18:06.210150 2602 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:18:06.210797 kubelet[2602]: I0813 07:18:06.210179 2602 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 07:18:06.210797 kubelet[2602]: I0813 07:18:06.210191 2602 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:18:06.210797 kubelet[2602]: E0813 07:18:06.210265 2602 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:18:06.305752 kubelet[2602]: I0813 07:18:06.303967 2602 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:18:06.305752 kubelet[2602]: I0813 07:18:06.303989 2602 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:18:06.305752 kubelet[2602]: I0813 07:18:06.304015 2602 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:18:06.305752 kubelet[2602]: I0813 07:18:06.304232 2602 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:18:06.305752 kubelet[2602]: I0813 07:18:06.304249 2602 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:18:06.305752 kubelet[2602]: I0813 07:18:06.304277 2602 policy_none.go:49] "None policy: Start" Aug 13 07:18:06.305752 kubelet[2602]: I0813 07:18:06.304292 2602 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:18:06.305752 kubelet[2602]: I0813 07:18:06.304307 2602 state_mem.go:35] "Initializing new in-memory 
state store" Aug 13 07:18:06.305752 kubelet[2602]: I0813 07:18:06.304546 2602 state_mem.go:75] "Updated machine memory state" Aug 13 07:18:06.310900 kubelet[2602]: E0813 07:18:06.310829 2602 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:18:06.313601 kubelet[2602]: I0813 07:18:06.312777 2602 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:18:06.313601 kubelet[2602]: I0813 07:18:06.312993 2602 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:18:06.313601 kubelet[2602]: I0813 07:18:06.313009 2602 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:18:06.313601 kubelet[2602]: I0813 07:18:06.313445 2602 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:18:06.317253 kubelet[2602]: E0813 07:18:06.317226 2602 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:18:06.443184 kubelet[2602]: I0813 07:18:06.442601 2602 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.463893 kubelet[2602]: I0813 07:18:06.463758 2602 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.463893 kubelet[2602]: I0813 07:18:06.463870 2602 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.512065 kubelet[2602]: I0813 07:18:06.511926 2602 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.512266 kubelet[2602]: I0813 07:18:06.512094 2602 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.514478 kubelet[2602]: I0813 07:18:06.511952 2602 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.524105 kubelet[2602]: W0813 07:18:06.523813 2602 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 07:18:06.526289 kubelet[2602]: W0813 07:18:06.526164 2602 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 07:18:06.527100 kubelet[2602]: W0813 07:18:06.526761 2602 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no 
more than 63 characters must not contain dots] Aug 13 07:18:06.564608 kubelet[2602]: I0813 07:18:06.564120 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f67b318e8046607042f09129c2564267-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"f67b318e8046607042f09129c2564267\") " pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.564608 kubelet[2602]: I0813 07:18:06.564193 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.564608 kubelet[2602]: I0813 07:18:06.564234 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.564608 kubelet[2602]: I0813 07:18:06.564274 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7aaf3489e5db9f851fb3558b54f60fde-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"7aaf3489e5db9f851fb3558b54f60fde\") " 
pod="kube-system/kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.564970 kubelet[2602]: I0813 07:18:06.564307 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f67b318e8046607042f09129c2564267-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"f67b318e8046607042f09129c2564267\") " pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.564970 kubelet[2602]: I0813 07:18:06.564341 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.564970 kubelet[2602]: I0813 07:18:06.564378 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.564970 kubelet[2602]: I0813 07:18:06.564411 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/256e0d08f6eabdc2f6a3277604e6378b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"256e0d08f6eabdc2f6a3277604e6378b\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:06.565100 kubelet[2602]: I0813 07:18:06.564442 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f67b318e8046607042f09129c2564267-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" (UID: \"f67b318e8046607042f09129c2564267\") " pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:07.158115 kubelet[2602]: I0813 07:18:07.156433 2602 apiserver.go:52] "Watching apiserver" Aug 13 07:18:07.260525 kubelet[2602]: I0813 07:18:07.260460 2602 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:18:07.268229 kubelet[2602]: I0813 07:18:07.268191 2602 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:07.269636 kubelet[2602]: I0813 07:18:07.269603 2602 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:07.279335 kubelet[2602]: W0813 07:18:07.279158 2602 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 07:18:07.279335 kubelet[2602]: E0813 07:18:07.279297 2602 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:07.282658 kubelet[2602]: W0813 07:18:07.281741 2602 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 07:18:07.282658 kubelet[2602]: E0813 07:18:07.281809 2602 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:07.319175 kubelet[2602]: I0813 07:18:07.318499 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" podStartSLOduration=1.318473716 podStartE2EDuration="1.318473716s" podCreationTimestamp="2025-08-13 07:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:07.306006572 +0000 UTC m=+1.335688902" watchObservedRunningTime="2025-08-13 07:18:07.318473716 +0000 UTC m=+1.348156041" Aug 13 07:18:07.335224 kubelet[2602]: I0813 07:18:07.335149 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" podStartSLOduration=1.335121996 podStartE2EDuration="1.335121996s" podCreationTimestamp="2025-08-13 07:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:07.318934672 +0000 UTC m=+1.348617001" watchObservedRunningTime="2025-08-13 07:18:07.335121996 +0000 UTC m=+1.364804323" Aug 13 07:18:07.350404 kubelet[2602]: I0813 07:18:07.350313 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" podStartSLOduration=1.350286319 podStartE2EDuration="1.350286319s" podCreationTimestamp="2025-08-13 07:18:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:07.33636415 +0000 UTC m=+1.366046479" watchObservedRunningTime="2025-08-13 07:18:07.350286319 +0000 UTC m=+1.379968715" Aug 13 07:18:11.017182 kubelet[2602]: I0813 07:18:11.017123 2602 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:18:11.017818 containerd[1462]: time="2025-08-13T07:18:11.017752769Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:18:11.018244 kubelet[2602]: I0813 07:18:11.018153 2602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:18:11.638145 systemd[1]: Created slice kubepods-besteffort-pod4a146f66_d797_41ab_a051_8def24c76aa4.slice - libcontainer container kubepods-besteffort-pod4a146f66_d797_41ab_a051_8def24c76aa4.slice. Aug 13 07:18:11.696619 kubelet[2602]: I0813 07:18:11.696557 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a146f66-d797-41ab-a051-8def24c76aa4-kube-proxy\") pod \"kube-proxy-h7qx4\" (UID: \"4a146f66-d797-41ab-a051-8def24c76aa4\") " pod="kube-system/kube-proxy-h7qx4" Aug 13 07:18:11.696830 kubelet[2602]: I0813 07:18:11.696632 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfsxh\" (UniqueName: \"kubernetes.io/projected/4a146f66-d797-41ab-a051-8def24c76aa4-kube-api-access-hfsxh\") pod \"kube-proxy-h7qx4\" (UID: \"4a146f66-d797-41ab-a051-8def24c76aa4\") " pod="kube-system/kube-proxy-h7qx4" Aug 13 07:18:11.696830 kubelet[2602]: I0813 07:18:11.696669 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/4a146f66-d797-41ab-a051-8def24c76aa4-xtables-lock\") pod \"kube-proxy-h7qx4\" (UID: \"4a146f66-d797-41ab-a051-8def24c76aa4\") " pod="kube-system/kube-proxy-h7qx4" Aug 13 07:18:11.696830 kubelet[2602]: I0813 07:18:11.696692 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a146f66-d797-41ab-a051-8def24c76aa4-lib-modules\") pod \"kube-proxy-h7qx4\" (UID: \"4a146f66-d797-41ab-a051-8def24c76aa4\") " pod="kube-system/kube-proxy-h7qx4" Aug 13 07:18:11.808771 kubelet[2602]: E0813 07:18:11.808296 2602 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 07:18:11.808771 kubelet[2602]: E0813 07:18:11.808348 2602 projected.go:194] Error preparing data for projected volume kube-api-access-hfsxh for pod kube-system/kube-proxy-h7qx4: configmap "kube-root-ca.crt" not found Aug 13 07:18:11.808771 kubelet[2602]: E0813 07:18:11.808454 2602 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4a146f66-d797-41ab-a051-8def24c76aa4-kube-api-access-hfsxh podName:4a146f66-d797-41ab-a051-8def24c76aa4 nodeName:}" failed. No retries permitted until 2025-08-13 07:18:12.308401937 +0000 UTC m=+6.338084259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hfsxh" (UniqueName: "kubernetes.io/projected/4a146f66-d797-41ab-a051-8def24c76aa4-kube-api-access-hfsxh") pod "kube-proxy-h7qx4" (UID: "4a146f66-d797-41ab-a051-8def24c76aa4") : configmap "kube-root-ca.crt" not found Aug 13 07:18:12.121632 kubelet[2602]: I0813 07:18:12.120990 2602 status_manager.go:890] "Failed to get status for pod" podUID="cef88116-ca72-41ad-94c0-d2579c8e121c" pod="tigera-operator/tigera-operator-747864d56d-5zrbk" err="pods \"tigera-operator-747864d56d-5zrbk\" is forbidden: User \"system:node:ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' and this object" Aug 13 07:18:12.121632 kubelet[2602]: W0813 07:18:12.121062 2602 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' and this object Aug 13 07:18:12.121632 kubelet[2602]: E0813 07:18:12.121099 2602 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 07:18:12.122985 systemd[1]: Created slice 
kubepods-besteffort-podcef88116_ca72_41ad_94c0_d2579c8e121c.slice - libcontainer container kubepods-besteffort-podcef88116_ca72_41ad_94c0_d2579c8e121c.slice. Aug 13 07:18:12.200874 kubelet[2602]: I0813 07:18:12.200800 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5cc\" (UniqueName: \"kubernetes.io/projected/cef88116-ca72-41ad-94c0-d2579c8e121c-kube-api-access-4v5cc\") pod \"tigera-operator-747864d56d-5zrbk\" (UID: \"cef88116-ca72-41ad-94c0-d2579c8e121c\") " pod="tigera-operator/tigera-operator-747864d56d-5zrbk" Aug 13 07:18:12.201086 kubelet[2602]: I0813 07:18:12.200943 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cef88116-ca72-41ad-94c0-d2579c8e121c-var-lib-calico\") pod \"tigera-operator-747864d56d-5zrbk\" (UID: \"cef88116-ca72-41ad-94c0-d2579c8e121c\") " pod="tigera-operator/tigera-operator-747864d56d-5zrbk" Aug 13 07:18:12.431069 containerd[1462]: time="2025-08-13T07:18:12.430922807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-5zrbk,Uid:cef88116-ca72-41ad-94c0-d2579c8e121c,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:18:12.468845 containerd[1462]: time="2025-08-13T07:18:12.468282042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:12.468845 containerd[1462]: time="2025-08-13T07:18:12.468387272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:12.468845 containerd[1462]: time="2025-08-13T07:18:12.468418333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:12.468845 containerd[1462]: time="2025-08-13T07:18:12.468601936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:12.507801 systemd[1]: Started cri-containerd-d478ee26423aa4b454f2cce6dad69bc2e7d4ddb2cdc93bd344c1f6330ae98eb6.scope - libcontainer container d478ee26423aa4b454f2cce6dad69bc2e7d4ddb2cdc93bd344c1f6330ae98eb6. Aug 13 07:18:12.552557 containerd[1462]: time="2025-08-13T07:18:12.551986361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h7qx4,Uid:4a146f66-d797-41ab-a051-8def24c76aa4,Namespace:kube-system,Attempt:0,}" Aug 13 07:18:12.569545 containerd[1462]: time="2025-08-13T07:18:12.568333518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-5zrbk,Uid:cef88116-ca72-41ad-94c0-d2579c8e121c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d478ee26423aa4b454f2cce6dad69bc2e7d4ddb2cdc93bd344c1f6330ae98eb6\"" Aug 13 07:18:12.573368 containerd[1462]: time="2025-08-13T07:18:12.573320165Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:18:12.596340 containerd[1462]: time="2025-08-13T07:18:12.596107417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:12.596340 containerd[1462]: time="2025-08-13T07:18:12.596192296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:12.596340 containerd[1462]: time="2025-08-13T07:18:12.596220778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:12.597572 containerd[1462]: time="2025-08-13T07:18:12.597476094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:12.621847 systemd[1]: Started cri-containerd-fba27fda2b43776abcb1f3f7486859cd3a175648c35386e555cb427701109388.scope - libcontainer container fba27fda2b43776abcb1f3f7486859cd3a175648c35386e555cb427701109388. Aug 13 07:18:12.658267 containerd[1462]: time="2025-08-13T07:18:12.658215548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h7qx4,Uid:4a146f66-d797-41ab-a051-8def24c76aa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fba27fda2b43776abcb1f3f7486859cd3a175648c35386e555cb427701109388\"" Aug 13 07:18:12.661800 containerd[1462]: time="2025-08-13T07:18:12.661739964Z" level=info msg="CreateContainer within sandbox \"fba27fda2b43776abcb1f3f7486859cd3a175648c35386e555cb427701109388\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:18:12.680885 containerd[1462]: time="2025-08-13T07:18:12.680817703Z" level=info msg="CreateContainer within sandbox \"fba27fda2b43776abcb1f3f7486859cd3a175648c35386e555cb427701109388\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0573dffd034af0d3c0fd97897d50798094dff26bcc6d25ac2cd2006cc966d61f\"" Aug 13 07:18:12.682903 containerd[1462]: time="2025-08-13T07:18:12.681864229Z" level=info msg="StartContainer for \"0573dffd034af0d3c0fd97897d50798094dff26bcc6d25ac2cd2006cc966d61f\"" Aug 13 07:18:12.720831 systemd[1]: Started cri-containerd-0573dffd034af0d3c0fd97897d50798094dff26bcc6d25ac2cd2006cc966d61f.scope - libcontainer container 0573dffd034af0d3c0fd97897d50798094dff26bcc6d25ac2cd2006cc966d61f. 
Aug 13 07:18:12.761950 containerd[1462]: time="2025-08-13T07:18:12.761864093Z" level=info msg="StartContainer for \"0573dffd034af0d3c0fd97897d50798094dff26bcc6d25ac2cd2006cc966d61f\" returns successfully" Aug 13 07:18:13.320548 kubelet[2602]: I0813 07:18:13.319123 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h7qx4" podStartSLOduration=2.319099734 podStartE2EDuration="2.319099734s" podCreationTimestamp="2025-08-13 07:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:13.29958355 +0000 UTC m=+7.329265899" watchObservedRunningTime="2025-08-13 07:18:13.319099734 +0000 UTC m=+7.348782062" Aug 13 07:18:13.711588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536135468.mount: Deactivated successfully. Aug 13 07:18:14.590406 containerd[1462]: time="2025-08-13T07:18:14.590329069Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:14.591903 containerd[1462]: time="2025-08-13T07:18:14.591812700Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:18:14.593656 containerd[1462]: time="2025-08-13T07:18:14.593577120Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:14.598362 containerd[1462]: time="2025-08-13T07:18:14.598278980Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:14.599561 containerd[1462]: time="2025-08-13T07:18:14.599318521Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id 
\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.025934768s" Aug 13 07:18:14.599561 containerd[1462]: time="2025-08-13T07:18:14.599370631Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:18:14.602783 containerd[1462]: time="2025-08-13T07:18:14.602491528Z" level=info msg="CreateContainer within sandbox \"d478ee26423aa4b454f2cce6dad69bc2e7d4ddb2cdc93bd344c1f6330ae98eb6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:18:14.628328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489469059.mount: Deactivated successfully. Aug 13 07:18:14.629588 containerd[1462]: time="2025-08-13T07:18:14.629436136Z" level=info msg="CreateContainer within sandbox \"d478ee26423aa4b454f2cce6dad69bc2e7d4ddb2cdc93bd344c1f6330ae98eb6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6db6807a67b602fc46bbfa155a7f7c643b94e1960362f7cf1ecdb94d44b41a06\"" Aug 13 07:18:14.631192 containerd[1462]: time="2025-08-13T07:18:14.630721190Z" level=info msg="StartContainer for \"6db6807a67b602fc46bbfa155a7f7c643b94e1960362f7cf1ecdb94d44b41a06\"" Aug 13 07:18:14.679848 systemd[1]: Started cri-containerd-6db6807a67b602fc46bbfa155a7f7c643b94e1960362f7cf1ecdb94d44b41a06.scope - libcontainer container 6db6807a67b602fc46bbfa155a7f7c643b94e1960362f7cf1ecdb94d44b41a06. 
Aug 13 07:18:14.716417 containerd[1462]: time="2025-08-13T07:18:14.715322941Z" level=info msg="StartContainer for \"6db6807a67b602fc46bbfa155a7f7c643b94e1960362f7cf1ecdb94d44b41a06\" returns successfully" Aug 13 07:18:15.305396 kubelet[2602]: I0813 07:18:15.304193 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-5zrbk" podStartSLOduration=1.276143367 podStartE2EDuration="3.304166489s" podCreationTimestamp="2025-08-13 07:18:12 +0000 UTC" firstStartedPulling="2025-08-13 07:18:12.572662118 +0000 UTC m=+6.602344437" lastFinishedPulling="2025-08-13 07:18:14.600685241 +0000 UTC m=+8.630367559" observedRunningTime="2025-08-13 07:18:15.303935017 +0000 UTC m=+9.333617345" watchObservedRunningTime="2025-08-13 07:18:15.304166489 +0000 UTC m=+9.333848816" Aug 13 07:18:22.287994 sudo[1743]: pam_unix(sudo:session): session closed for user root Aug 13 07:18:22.334908 sshd[1740]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:22.346621 systemd[1]: sshd@9-10.128.0.37:22-139.178.68.195:59020.service: Deactivated successfully. Aug 13 07:18:22.347645 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:18:22.352250 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:18:22.352943 systemd[1]: session-9.scope: Consumed 6.196s CPU time, 160.2M memory peak, 0B memory swap peak. Aug 13 07:18:22.357897 systemd-logind[1441]: Removed session 9. Aug 13 07:18:27.943711 systemd[1]: Created slice kubepods-besteffort-podc97e1622_4c42_4442_87ce_853951703caf.slice - libcontainer container kubepods-besteffort-podc97e1622_4c42_4442_87ce_853951703caf.slice. 
Aug 13 07:18:28.009834 kubelet[2602]: I0813 07:18:28.009763 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7rsn\" (UniqueName: \"kubernetes.io/projected/c97e1622-4c42-4442-87ce-853951703caf-kube-api-access-g7rsn\") pod \"calico-typha-7b687f584c-mm622\" (UID: \"c97e1622-4c42-4442-87ce-853951703caf\") " pod="calico-system/calico-typha-7b687f584c-mm622" Aug 13 07:18:28.010478 kubelet[2602]: I0813 07:18:28.009848 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c97e1622-4c42-4442-87ce-853951703caf-tigera-ca-bundle\") pod \"calico-typha-7b687f584c-mm622\" (UID: \"c97e1622-4c42-4442-87ce-853951703caf\") " pod="calico-system/calico-typha-7b687f584c-mm622" Aug 13 07:18:28.010478 kubelet[2602]: I0813 07:18:28.009876 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c97e1622-4c42-4442-87ce-853951703caf-typha-certs\") pod \"calico-typha-7b687f584c-mm622\" (UID: \"c97e1622-4c42-4442-87ce-853951703caf\") " pod="calico-system/calico-typha-7b687f584c-mm622" Aug 13 07:18:28.257834 containerd[1462]: time="2025-08-13T07:18:28.256985570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b687f584c-mm622,Uid:c97e1622-4c42-4442-87ce-853951703caf,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:28.321134 containerd[1462]: time="2025-08-13T07:18:28.320912605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:28.321134 containerd[1462]: time="2025-08-13T07:18:28.320999011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:28.321134 containerd[1462]: time="2025-08-13T07:18:28.321024156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:28.321596 containerd[1462]: time="2025-08-13T07:18:28.321152968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:28.376958 systemd[1]: Started cri-containerd-f7750133234aefef3de7f2a6335c58aab205e4b64c93792e65b5030c5b4831c4.scope - libcontainer container f7750133234aefef3de7f2a6335c58aab205e4b64c93792e65b5030c5b4831c4. Aug 13 07:18:28.408382 systemd[1]: Created slice kubepods-besteffort-pod0b6ad8dc_2c2d_4d92_83f8_d7a1045ba61c.slice - libcontainer container kubepods-besteffort-pod0b6ad8dc_2c2d_4d92_83f8_d7a1045ba61c.slice. Aug 13 07:18:28.413447 kubelet[2602]: I0813 07:18:28.413391 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-cni-log-dir\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.413447 kubelet[2602]: I0813 07:18:28.413440 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-cni-net-dir\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.415605 kubelet[2602]: I0813 07:18:28.413472 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-var-lib-calico\") pod \"calico-node-fhw5t\" (UID: 
\"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.415605 kubelet[2602]: I0813 07:18:28.413498 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6hh\" (UniqueName: \"kubernetes.io/projected/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-kube-api-access-5r6hh\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.415605 kubelet[2602]: I0813 07:18:28.414336 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-flexvol-driver-host\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.415605 kubelet[2602]: I0813 07:18:28.414398 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-lib-modules\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.415605 kubelet[2602]: I0813 07:18:28.414426 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-policysync\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.417815 kubelet[2602]: I0813 07:18:28.414468 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-tigera-ca-bundle\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " 
pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.417815 kubelet[2602]: I0813 07:18:28.414500 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-cni-bin-dir\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.417815 kubelet[2602]: I0813 07:18:28.414565 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-node-certs\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.417815 kubelet[2602]: I0813 07:18:28.414593 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-var-run-calico\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.417815 kubelet[2602]: I0813 07:18:28.414620 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c-xtables-lock\") pod \"calico-node-fhw5t\" (UID: \"0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c\") " pod="calico-system/calico-node-fhw5t" Aug 13 07:18:28.529614 kubelet[2602]: E0813 07:18:28.528610 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.529614 kubelet[2602]: W0813 07:18:28.528644 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Aug 13 07:18:28.529614 kubelet[2602]: E0813 07:18:28.528710 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.550751 kubelet[2602]: E0813 07:18:28.550686 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.551028 kubelet[2602]: W0813 07:18:28.550972 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.551372 kubelet[2602]: E0813 07:18:28.551333 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.562870 containerd[1462]: time="2025-08-13T07:18:28.562716142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b687f584c-mm622,Uid:c97e1622-4c42-4442-87ce-853951703caf,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7750133234aefef3de7f2a6335c58aab205e4b64c93792e65b5030c5b4831c4\"" Aug 13 07:18:28.566975 containerd[1462]: time="2025-08-13T07:18:28.566919534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:18:28.717934 containerd[1462]: time="2025-08-13T07:18:28.716732541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fhw5t,Uid:0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:28.784172 containerd[1462]: time="2025-08-13T07:18:28.781050105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:28.784172 containerd[1462]: time="2025-08-13T07:18:28.781800045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:28.784172 containerd[1462]: time="2025-08-13T07:18:28.781837316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:28.784172 containerd[1462]: time="2025-08-13T07:18:28.782008427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:28.838833 systemd[1]: Started cri-containerd-f71b3c8c450cd7668c468094f02e59ff8167cfc34682b782a17f6d0022db8217.scope - libcontainer container f71b3c8c450cd7668c468094f02e59ff8167cfc34682b782a17f6d0022db8217. Aug 13 07:18:28.849557 kubelet[2602]: E0813 07:18:28.849287 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmnnx" podUID="9995649e-a9c2-4dd0-ab3a-469f68507e9a" Aug 13 07:18:28.913382 kubelet[2602]: E0813 07:18:28.913338 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.913594 kubelet[2602]: W0813 07:18:28.913393 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.913594 kubelet[2602]: E0813 07:18:28.913427 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.913997 kubelet[2602]: E0813 07:18:28.913882 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.913997 kubelet[2602]: W0813 07:18:28.913906 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.913997 kubelet[2602]: E0813 07:18:28.913948 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.917308 kubelet[2602]: E0813 07:18:28.915968 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.917308 kubelet[2602]: W0813 07:18:28.915990 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.917308 kubelet[2602]: E0813 07:18:28.917026 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.918722 kubelet[2602]: E0813 07:18:28.918698 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.918722 kubelet[2602]: W0813 07:18:28.918722 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.918883 kubelet[2602]: E0813 07:18:28.918744 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.919825 kubelet[2602]: E0813 07:18:28.919800 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.919825 kubelet[2602]: W0813 07:18:28.919824 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.920014 kubelet[2602]: E0813 07:18:28.919844 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.923045 kubelet[2602]: E0813 07:18:28.922499 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.923045 kubelet[2602]: W0813 07:18:28.922547 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.923045 kubelet[2602]: E0813 07:18:28.922570 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.923684 kubelet[2602]: E0813 07:18:28.923417 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.923684 kubelet[2602]: W0813 07:18:28.923439 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.923684 kubelet[2602]: E0813 07:18:28.923460 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.924203 kubelet[2602]: E0813 07:18:28.923812 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.924203 kubelet[2602]: W0813 07:18:28.923828 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.924203 kubelet[2602]: E0813 07:18:28.923847 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.925986 kubelet[2602]: E0813 07:18:28.925955 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.925986 kubelet[2602]: W0813 07:18:28.925982 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.926153 kubelet[2602]: E0813 07:18:28.926107 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.927159 kubelet[2602]: E0813 07:18:28.927133 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.927248 kubelet[2602]: W0813 07:18:28.927167 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.927248 kubelet[2602]: E0813 07:18:28.927187 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.929606 kubelet[2602]: E0813 07:18:28.929578 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.929606 kubelet[2602]: W0813 07:18:28.929603 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.929783 kubelet[2602]: E0813 07:18:28.929624 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.930804 kubelet[2602]: E0813 07:18:28.930729 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.930804 kubelet[2602]: W0813 07:18:28.930751 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.930804 kubelet[2602]: E0813 07:18:28.930771 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.931844 kubelet[2602]: E0813 07:18:28.931625 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.931844 kubelet[2602]: W0813 07:18:28.931645 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.931844 kubelet[2602]: E0813 07:18:28.931680 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.932436 kubelet[2602]: I0813 07:18:28.931716 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9995649e-a9c2-4dd0-ab3a-469f68507e9a-kubelet-dir\") pod \"csi-node-driver-vmnnx\" (UID: \"9995649e-a9c2-4dd0-ab3a-469f68507e9a\") " pod="calico-system/csi-node-driver-vmnnx" Aug 13 07:18:28.932436 kubelet[2602]: E0813 07:18:28.932313 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.932436 kubelet[2602]: W0813 07:18:28.932326 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.932436 kubelet[2602]: E0813 07:18:28.932363 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.934208 kubelet[2602]: E0813 07:18:28.934003 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.934208 kubelet[2602]: W0813 07:18:28.934027 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.934208 kubelet[2602]: E0813 07:18:28.934062 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.935118 kubelet[2602]: E0813 07:18:28.934529 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.935118 kubelet[2602]: W0813 07:18:28.934548 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.935485 kubelet[2602]: E0813 07:18:28.935356 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.936257 kubelet[2602]: E0813 07:18:28.936146 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.936615 kubelet[2602]: W0813 07:18:28.936592 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.937417 kubelet[2602]: E0813 07:18:28.937175 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.937417 kubelet[2602]: I0813 07:18:28.937224 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9995649e-a9c2-4dd0-ab3a-469f68507e9a-registration-dir\") pod \"csi-node-driver-vmnnx\" (UID: \"9995649e-a9c2-4dd0-ab3a-469f68507e9a\") " pod="calico-system/csi-node-driver-vmnnx" Aug 13 07:18:28.939396 kubelet[2602]: E0813 07:18:28.939067 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.939396 kubelet[2602]: W0813 07:18:28.939087 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.940003 kubelet[2602]: E0813 07:18:28.939662 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.940783 kubelet[2602]: E0813 07:18:28.940503 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.940783 kubelet[2602]: W0813 07:18:28.940588 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.941199 kubelet[2602]: E0813 07:18:28.940995 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.941991 kubelet[2602]: E0813 07:18:28.941925 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.942350 kubelet[2602]: W0813 07:18:28.942136 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.942350 kubelet[2602]: E0813 07:18:28.942243 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.942350 kubelet[2602]: I0813 07:18:28.942297 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9995649e-a9c2-4dd0-ab3a-469f68507e9a-socket-dir\") pod \"csi-node-driver-vmnnx\" (UID: \"9995649e-a9c2-4dd0-ab3a-469f68507e9a\") " pod="calico-system/csi-node-driver-vmnnx" Aug 13 07:18:28.943753 kubelet[2602]: E0813 07:18:28.943588 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.943753 kubelet[2602]: W0813 07:18:28.943607 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.944271 kubelet[2602]: E0813 07:18:28.944125 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.944478 kubelet[2602]: E0813 07:18:28.944450 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.944818 kubelet[2602]: W0813 07:18:28.944585 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.945274 containerd[1462]: time="2025-08-13T07:18:28.943916589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fhw5t,Uid:0b6ad8dc-2c2d-4d92-83f8-d7a1045ba61c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f71b3c8c450cd7668c468094f02e59ff8167cfc34682b782a17f6d0022db8217\"" Aug 13 07:18:28.945398 kubelet[2602]: E0813 07:18:28.944973 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.946392 kubelet[2602]: E0813 07:18:28.946184 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.946392 kubelet[2602]: W0813 07:18:28.946204 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.946392 kubelet[2602]: E0813 07:18:28.946228 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.947733 kubelet[2602]: E0813 07:18:28.947599 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.947733 kubelet[2602]: W0813 07:18:28.947618 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.948135 kubelet[2602]: E0813 07:18:28.947969 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.948531 kubelet[2602]: E0813 07:18:28.948408 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.948531 kubelet[2602]: W0813 07:18:28.948425 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.948822 kubelet[2602]: E0813 07:18:28.948658 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.949248 kubelet[2602]: E0813 07:18:28.949114 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.949248 kubelet[2602]: W0813 07:18:28.949132 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.949559 kubelet[2602]: E0813 07:18:28.949439 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.950226 kubelet[2602]: E0813 07:18:28.950067 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.950226 kubelet[2602]: W0813 07:18:28.950087 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.950226 kubelet[2602]: E0813 07:18:28.950104 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:28.951702 kubelet[2602]: E0813 07:18:28.951467 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.951702 kubelet[2602]: W0813 07:18:28.951486 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.951702 kubelet[2602]: E0813 07:18:28.951556 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:28.952465 kubelet[2602]: E0813 07:18:28.952280 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:28.952465 kubelet[2602]: W0813 07:18:28.952299 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:28.952465 kubelet[2602]: E0813 07:18:28.952317 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.053174 kubelet[2602]: E0813 07:18:29.053124 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.055720 kubelet[2602]: W0813 07:18:29.055650 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.055865 kubelet[2602]: E0813 07:18:29.055732 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.056803 kubelet[2602]: E0813 07:18:29.056755 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.056899 kubelet[2602]: W0813 07:18:29.056825 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.056899 kubelet[2602]: E0813 07:18:29.056871 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.057398 kubelet[2602]: I0813 07:18:29.057366 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9995649e-a9c2-4dd0-ab3a-469f68507e9a-varrun\") pod \"csi-node-driver-vmnnx\" (UID: \"9995649e-a9c2-4dd0-ab3a-469f68507e9a\") " pod="calico-system/csi-node-driver-vmnnx" Aug 13 07:18:29.057555 kubelet[2602]: E0813 07:18:29.057536 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.057622 kubelet[2602]: W0813 07:18:29.057557 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.057622 kubelet[2602]: E0813 07:18:29.057578 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.058134 kubelet[2602]: E0813 07:18:29.058022 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.058134 kubelet[2602]: W0813 07:18:29.058041 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.058134 kubelet[2602]: E0813 07:18:29.058078 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.059078 kubelet[2602]: E0813 07:18:29.058618 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.059078 kubelet[2602]: W0813 07:18:29.058633 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.059078 kubelet[2602]: E0813 07:18:29.058667 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.059288 kubelet[2602]: E0813 07:18:29.059247 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.059288 kubelet[2602]: W0813 07:18:29.059265 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.059444 kubelet[2602]: E0813 07:18:29.059289 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.059444 kubelet[2602]: I0813 07:18:29.059323 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7xb6\" (UniqueName: \"kubernetes.io/projected/9995649e-a9c2-4dd0-ab3a-469f68507e9a-kube-api-access-k7xb6\") pod \"csi-node-driver-vmnnx\" (UID: \"9995649e-a9c2-4dd0-ab3a-469f68507e9a\") " pod="calico-system/csi-node-driver-vmnnx" Aug 13 07:18:29.059877 kubelet[2602]: E0813 07:18:29.059853 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.059877 kubelet[2602]: W0813 07:18:29.059875 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.060286 kubelet[2602]: E0813 07:18:29.060067 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.060369 kubelet[2602]: E0813 07:18:29.060350 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.060369 kubelet[2602]: W0813 07:18:29.060364 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.060550 kubelet[2602]: E0813 07:18:29.060490 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.060917 kubelet[2602]: E0813 07:18:29.060874 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.060917 kubelet[2602]: W0813 07:18:29.060892 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.061066 kubelet[2602]: E0813 07:18:29.061020 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.061325 kubelet[2602]: E0813 07:18:29.061306 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.061325 kubelet[2602]: W0813 07:18:29.061323 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.061451 kubelet[2602]: E0813 07:18:29.061422 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.061759 kubelet[2602]: E0813 07:18:29.061740 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.061759 kubelet[2602]: W0813 07:18:29.061757 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.061912 kubelet[2602]: E0813 07:18:29.061887 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.062206 kubelet[2602]: E0813 07:18:29.062165 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.062206 kubelet[2602]: W0813 07:18:29.062183 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.062334 kubelet[2602]: E0813 07:18:29.062307 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.062597 kubelet[2602]: E0813 07:18:29.062578 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.062597 kubelet[2602]: W0813 07:18:29.062596 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.062745 kubelet[2602]: E0813 07:18:29.062712 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.062992 kubelet[2602]: E0813 07:18:29.062973 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.062992 kubelet[2602]: W0813 07:18:29.062990 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.063125 kubelet[2602]: E0813 07:18:29.063013 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.063358 kubelet[2602]: E0813 07:18:29.063340 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.063358 kubelet[2602]: W0813 07:18:29.063358 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.063487 kubelet[2602]: E0813 07:18:29.063380 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.063788 kubelet[2602]: E0813 07:18:29.063768 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.063788 kubelet[2602]: W0813 07:18:29.063786 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.063918 kubelet[2602]: E0813 07:18:29.063808 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.065982 kubelet[2602]: E0813 07:18:29.064724 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.065982 kubelet[2602]: W0813 07:18:29.064742 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.065982 kubelet[2602]: E0813 07:18:29.064872 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.065982 kubelet[2602]: E0813 07:18:29.065096 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.065982 kubelet[2602]: W0813 07:18:29.065109 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.065982 kubelet[2602]: E0813 07:18:29.065125 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.065982 kubelet[2602]: E0813 07:18:29.065627 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.065982 kubelet[2602]: W0813 07:18:29.065642 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.065982 kubelet[2602]: E0813 07:18:29.065660 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.066584 kubelet[2602]: E0813 07:18:29.066115 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.066584 kubelet[2602]: W0813 07:18:29.066130 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.066584 kubelet[2602]: E0813 07:18:29.066148 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.066584 kubelet[2602]: E0813 07:18:29.066553 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.066584 kubelet[2602]: W0813 07:18:29.066568 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.066584 kubelet[2602]: E0813 07:18:29.066583 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.167543 kubelet[2602]: E0813 07:18:29.166666 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.167543 kubelet[2602]: W0813 07:18:29.166732 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.167543 kubelet[2602]: E0813 07:18:29.166766 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.167543 kubelet[2602]: E0813 07:18:29.167399 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.167543 kubelet[2602]: W0813 07:18:29.167418 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.168037 kubelet[2602]: E0813 07:18:29.167456 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.168037 kubelet[2602]: E0813 07:18:29.167921 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.168037 kubelet[2602]: W0813 07:18:29.167950 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.168037 kubelet[2602]: E0813 07:18:29.167974 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.168552 kubelet[2602]: E0813 07:18:29.168503 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.168552 kubelet[2602]: W0813 07:18:29.168552 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.168745 kubelet[2602]: E0813 07:18:29.168592 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.168983 kubelet[2602]: E0813 07:18:29.168958 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.168983 kubelet[2602]: W0813 07:18:29.168981 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.169133 kubelet[2602]: E0813 07:18:29.169007 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.169467 kubelet[2602]: E0813 07:18:29.169436 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.169467 kubelet[2602]: W0813 07:18:29.169460 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.169637 kubelet[2602]: E0813 07:18:29.169536 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.169929 kubelet[2602]: E0813 07:18:29.169864 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.169929 kubelet[2602]: W0813 07:18:29.169887 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.169929 kubelet[2602]: E0813 07:18:29.169906 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.170269 kubelet[2602]: E0813 07:18:29.170245 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.170269 kubelet[2602]: W0813 07:18:29.170268 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.170423 kubelet[2602]: E0813 07:18:29.170285 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.170682 kubelet[2602]: E0813 07:18:29.170652 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.170682 kubelet[2602]: W0813 07:18:29.170675 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.170840 kubelet[2602]: E0813 07:18:29.170692 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:29.172564 kubelet[2602]: E0813 07:18:29.172262 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.172564 kubelet[2602]: W0813 07:18:29.172286 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.172564 kubelet[2602]: E0813 07:18:29.172311 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.202681 kubelet[2602]: E0813 07:18:29.202643 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:29.202681 kubelet[2602]: W0813 07:18:29.202678 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:29.202917 kubelet[2602]: E0813 07:18:29.202709 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:29.701475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966965837.mount: Deactivated successfully. 
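The FlexVolume triplet that repeats above (driver-call.go:262, driver-call.go:149, plugins.go:695) is one failure reported three ways: the probed driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, the driver call therefore produces empty output, and unmarshalling "" as JSON fails with "unexpected end of JSON input". A minimal Python sketch of that chain (a hypothetical illustration only; the kubelet's actual driver-call logic is Go, and probe_flexvolume_driver is an invented name):

```python
import json
import shutil

def probe_flexvolume_driver(path: str) -> dict:
    # Mirrors the failure chain in the log: if the driver binary is absent,
    # the call yields no output at all, and parsing "" as JSON fails.
    if shutil.which(path) is None:
        output = ""  # kubelet logs: executable file not found in $PATH, output: ""
    else:
        output = '{"status": "Success"}'  # hypothetical reply from a healthy driver
    try:
        return json.loads(output)
    except json.JSONDecodeError as exc:
        # Analogous to Go's "unexpected end of JSON input" from empty input
        raise RuntimeError(f"Failed to unmarshal output for command: init: {exc}") from exc

# A driver path that does not exist reproduces the logged failure:
try:
    probe_flexvolume_driver("/nonexistent/kubelet-plugins/volume/exec/nodeagent~uds/uds")
except RuntimeError as err:
    print(err)
```

These messages are noisy but evidently non-fatal here: the kubelet keeps re-probing the nodeagent~uds directory on its plugin-probe interval while the containers in the rest of the log continue to start normally.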
Aug 13 07:18:30.212396 kubelet[2602]: E0813 07:18:30.211904 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmnnx" podUID="9995649e-a9c2-4dd0-ab3a-469f68507e9a"
Aug 13 07:18:30.819386 containerd[1462]: time="2025-08-13T07:18:30.819313390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:30.820807 containerd[1462]: time="2025-08-13T07:18:30.820727783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 07:18:30.822536 containerd[1462]: time="2025-08-13T07:18:30.822407574Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:30.826150 containerd[1462]: time="2025-08-13T07:18:30.826075724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:30.827673 containerd[1462]: time="2025-08-13T07:18:30.827249110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.260234861s"
Aug 13 07:18:30.827673 containerd[1462]: time="2025-08-13T07:18:30.827301996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 07:18:30.831112 containerd[1462]: time="2025-08-13T07:18:30.831076318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 07:18:30.855806 containerd[1462]: time="2025-08-13T07:18:30.855742551Z" level=info msg="CreateContainer within sandbox \"f7750133234aefef3de7f2a6335c58aab205e4b64c93792e65b5030c5b4831c4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 07:18:30.887187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216681135.mount: Deactivated successfully.
Aug 13 07:18:30.888427 containerd[1462]: time="2025-08-13T07:18:30.888232890Z" level=info msg="CreateContainer within sandbox \"f7750133234aefef3de7f2a6335c58aab205e4b64c93792e65b5030c5b4831c4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8a5b3ee0db892e2c12d4a8fd8d5a090e015c7c1a8c9d1ee08dc11473c36af6ac\""
Aug 13 07:18:30.890319 containerd[1462]: time="2025-08-13T07:18:30.890210838Z" level=info msg="StartContainer for \"8a5b3ee0db892e2c12d4a8fd8d5a090e015c7c1a8c9d1ee08dc11473c36af6ac\""
Aug 13 07:18:30.940854 systemd[1]: Started cri-containerd-8a5b3ee0db892e2c12d4a8fd8d5a090e015c7c1a8c9d1ee08dc11473c36af6ac.scope - libcontainer container 8a5b3ee0db892e2c12d4a8fd8d5a090e015c7c1a8c9d1ee08dc11473c36af6ac.
Aug 13 07:18:31.002549 containerd[1462]: time="2025-08-13T07:18:31.002463778Z" level=info msg="StartContainer for \"8a5b3ee0db892e2c12d4a8fd8d5a090e015c7c1a8c9d1ee08dc11473c36af6ac\" returns successfully"
Aug 13 07:18:31.432156 kubelet[2602]: I0813 07:18:31.432074 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b687f584c-mm622" podStartSLOduration=2.167335331 podStartE2EDuration="4.432051248s" podCreationTimestamp="2025-08-13 07:18:27 +0000 UTC" firstStartedPulling="2025-08-13 07:18:28.565317971 +0000 UTC m=+22.595000284" lastFinishedPulling="2025-08-13 07:18:30.83003388 +0000 UTC m=+24.859716201" observedRunningTime="2025-08-13 07:18:31.431830735 +0000 UTC m=+25.461513062" watchObservedRunningTime="2025-08-13 07:18:31.432051248 +0000 UTC m=+25.461733581"
Aug 13 07:18:31.469876 kubelet[2602]: E0813 07:18:31.469827 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:18:31.470145 kubelet[2602]: W0813 07:18:31.469885 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:18:31.470145 kubelet[2602]: E0813 07:18:31.469918 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 07:18:31.470712 kubelet[2602]: E0813 07:18:31.470679 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.470823 kubelet[2602]: W0813 07:18:31.470742 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.470823 kubelet[2602]: E0813 07:18:31.470770 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.472456 kubelet[2602]: E0813 07:18:31.472415 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.472456 kubelet[2602]: W0813 07:18:31.472444 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.473621 kubelet[2602]: E0813 07:18:31.472471 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.473621 kubelet[2602]: E0813 07:18:31.473166 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.473621 kubelet[2602]: W0813 07:18:31.473184 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.473621 kubelet[2602]: E0813 07:18:31.473231 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.475771 kubelet[2602]: E0813 07:18:31.474792 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.475771 kubelet[2602]: W0813 07:18:31.474816 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.475771 kubelet[2602]: E0813 07:18:31.474843 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.475993 kubelet[2602]: E0813 07:18:31.475794 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.475993 kubelet[2602]: W0813 07:18:31.475815 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.475993 kubelet[2602]: E0813 07:18:31.475835 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.477824 kubelet[2602]: E0813 07:18:31.476869 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.477824 kubelet[2602]: W0813 07:18:31.476890 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.477824 kubelet[2602]: E0813 07:18:31.476909 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.478434 kubelet[2602]: E0813 07:18:31.478410 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.478434 kubelet[2602]: W0813 07:18:31.478432 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.478663 kubelet[2602]: E0813 07:18:31.478454 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.479501 kubelet[2602]: E0813 07:18:31.479462 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.479634 kubelet[2602]: W0813 07:18:31.479546 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.479634 kubelet[2602]: E0813 07:18:31.479570 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.480862 kubelet[2602]: E0813 07:18:31.480830 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.480955 kubelet[2602]: W0813 07:18:31.480868 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.480955 kubelet[2602]: E0813 07:18:31.480889 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.481462 kubelet[2602]: E0813 07:18:31.481438 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.481462 kubelet[2602]: W0813 07:18:31.481460 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.481626 kubelet[2602]: E0813 07:18:31.481479 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.483913 kubelet[2602]: E0813 07:18:31.483870 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.483913 kubelet[2602]: W0813 07:18:31.483895 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.484327 kubelet[2602]: E0813 07:18:31.483918 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.484411 kubelet[2602]: E0813 07:18:31.484337 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.484411 kubelet[2602]: W0813 07:18:31.484352 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.484411 kubelet[2602]: E0813 07:18:31.484371 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.485315 kubelet[2602]: E0813 07:18:31.485286 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.485315 kubelet[2602]: W0813 07:18:31.485311 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.485479 kubelet[2602]: E0813 07:18:31.485330 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.486580 kubelet[2602]: E0813 07:18:31.485867 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.486580 kubelet[2602]: W0813 07:18:31.485883 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.486580 kubelet[2602]: E0813 07:18:31.485911 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.491841 kubelet[2602]: E0813 07:18:31.491790 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.491841 kubelet[2602]: W0813 07:18:31.491826 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.492067 kubelet[2602]: E0813 07:18:31.491866 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.492344 kubelet[2602]: E0813 07:18:31.492316 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.492344 kubelet[2602]: W0813 07:18:31.492332 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.492455 kubelet[2602]: E0813 07:18:31.492354 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.492745 kubelet[2602]: E0813 07:18:31.492725 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.492835 kubelet[2602]: W0813 07:18:31.492745 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.492835 kubelet[2602]: E0813 07:18:31.492782 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.493185 kubelet[2602]: E0813 07:18:31.493162 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.493185 kubelet[2602]: W0813 07:18:31.493184 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.493323 kubelet[2602]: E0813 07:18:31.493211 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.495866 kubelet[2602]: E0813 07:18:31.495832 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.495866 kubelet[2602]: W0813 07:18:31.495866 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.496035 kubelet[2602]: E0813 07:18:31.495968 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.496278 kubelet[2602]: E0813 07:18:31.496257 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.496278 kubelet[2602]: W0813 07:18:31.496277 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.496420 kubelet[2602]: E0813 07:18:31.496374 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.496703 kubelet[2602]: E0813 07:18:31.496679 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.496703 kubelet[2602]: W0813 07:18:31.496702 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.496856 kubelet[2602]: E0813 07:18:31.496822 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.497090 kubelet[2602]: E0813 07:18:31.497068 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.497090 kubelet[2602]: W0813 07:18:31.497089 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.497610 kubelet[2602]: E0813 07:18:31.497111 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.497610 kubelet[2602]: E0813 07:18:31.497503 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.497610 kubelet[2602]: W0813 07:18:31.497539 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.497610 kubelet[2602]: E0813 07:18:31.497574 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.499624 kubelet[2602]: E0813 07:18:31.499595 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.499624 kubelet[2602]: W0813 07:18:31.499623 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.499779 kubelet[2602]: E0813 07:18:31.499722 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.500559 kubelet[2602]: E0813 07:18:31.500157 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.500559 kubelet[2602]: W0813 07:18:31.500191 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.500559 kubelet[2602]: E0813 07:18:31.500286 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.500977 kubelet[2602]: E0813 07:18:31.500613 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.500977 kubelet[2602]: W0813 07:18:31.500627 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.500977 kubelet[2602]: E0813 07:18:31.500662 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.501142 kubelet[2602]: E0813 07:18:31.501030 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.501142 kubelet[2602]: W0813 07:18:31.501045 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.501142 kubelet[2602]: E0813 07:18:31.501077 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.501538 kubelet[2602]: E0813 07:18:31.501499 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.501538 kubelet[2602]: W0813 07:18:31.501533 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.501685 kubelet[2602]: E0813 07:18:31.501567 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.503631 kubelet[2602]: E0813 07:18:31.503605 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.503631 kubelet[2602]: W0813 07:18:31.503629 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.503798 kubelet[2602]: E0813 07:18:31.503653 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.504092 kubelet[2602]: E0813 07:18:31.504070 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.504092 kubelet[2602]: W0813 07:18:31.504090 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.504237 kubelet[2602]: E0813 07:18:31.504193 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:31.506659 kubelet[2602]: E0813 07:18:31.506601 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.506659 kubelet[2602]: W0813 07:18:31.506625 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.506830 kubelet[2602]: E0813 07:18:31.506665 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:31.507191 kubelet[2602]: E0813 07:18:31.507166 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:31.507191 kubelet[2602]: W0813 07:18:31.507190 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:31.507353 kubelet[2602]: E0813 07:18:31.507209 2602 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 07:18:31.766948 containerd[1462]: time="2025-08-13T07:18:31.765777366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:31.768542 containerd[1462]: time="2025-08-13T07:18:31.768393512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Aug 13 07:18:31.769819 containerd[1462]: time="2025-08-13T07:18:31.769614931Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:31.773817 containerd[1462]: time="2025-08-13T07:18:31.773750910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:31.775164 containerd[1462]: time="2025-08-13T07:18:31.775116704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 943.606923ms"
Aug 13 07:18:31.775536 containerd[1462]: time="2025-08-13T07:18:31.775169761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Aug 13 07:18:31.779056 containerd[1462]: time="2025-08-13T07:18:31.778554155Z" level=info msg="CreateContainer within sandbox \"f71b3c8c450cd7668c468094f02e59ff8167cfc34682b782a17f6d0022db8217\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 13 07:18:31.794882 containerd[1462]: time="2025-08-13T07:18:31.794825173Z" level=info msg="CreateContainer within sandbox \"f71b3c8c450cd7668c468094f02e59ff8167cfc34682b782a17f6d0022db8217\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"747f500f31eaff4e5c3edc2d985b2b9058e0edf25b8939987944621fc2e4f340\""
Aug 13 07:18:31.796286 containerd[1462]: time="2025-08-13T07:18:31.796229446Z" level=info msg="StartContainer for \"747f500f31eaff4e5c3edc2d985b2b9058e0edf25b8939987944621fc2e4f340\""
Aug 13 07:18:31.840828 systemd[1]: Started cri-containerd-747f500f31eaff4e5c3edc2d985b2b9058e0edf25b8939987944621fc2e4f340.scope - libcontainer container 747f500f31eaff4e5c3edc2d985b2b9058e0edf25b8939987944621fc2e4f340.
Aug 13 07:18:31.893538 containerd[1462]: time="2025-08-13T07:18:31.893395828Z" level=info msg="StartContainer for \"747f500f31eaff4e5c3edc2d985b2b9058e0edf25b8939987944621fc2e4f340\" returns successfully"
Aug 13 07:18:31.914506 systemd[1]: cri-containerd-747f500f31eaff4e5c3edc2d985b2b9058e0edf25b8939987944621fc2e4f340.scope: Deactivated successfully.
Aug 13 07:18:31.955935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-747f500f31eaff4e5c3edc2d985b2b9058e0edf25b8939987944621fc2e4f340-rootfs.mount: Deactivated successfully.
Aug 13 07:18:32.213695 kubelet[2602]: E0813 07:18:32.213643 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmnnx" podUID="9995649e-a9c2-4dd0-ab3a-469f68507e9a" Aug 13 07:18:32.375394 kubelet[2602]: I0813 07:18:32.375358 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:32.634541 containerd[1462]: time="2025-08-13T07:18:32.634433999Z" level=info msg="shim disconnected" id=747f500f31eaff4e5c3edc2d985b2b9058e0edf25b8939987944621fc2e4f340 namespace=k8s.io Aug 13 07:18:32.634780 containerd[1462]: time="2025-08-13T07:18:32.634665750Z" level=warning msg="cleaning up after shim disconnected" id=747f500f31eaff4e5c3edc2d985b2b9058e0edf25b8939987944621fc2e4f340 namespace=k8s.io Aug 13 07:18:32.634780 containerd[1462]: time="2025-08-13T07:18:32.634688210Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:18:33.380979 containerd[1462]: time="2025-08-13T07:18:33.380932444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:18:34.214434 kubelet[2602]: E0813 07:18:34.214381 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmnnx" podUID="9995649e-a9c2-4dd0-ab3a-469f68507e9a" Aug 13 07:18:36.213728 kubelet[2602]: E0813 07:18:36.213574 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmnnx" podUID="9995649e-a9c2-4dd0-ab3a-469f68507e9a" Aug 13 
07:18:36.424188 containerd[1462]: time="2025-08-13T07:18:36.424117569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:36.425616 containerd[1462]: time="2025-08-13T07:18:36.425558320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:18:36.427868 containerd[1462]: time="2025-08-13T07:18:36.427388796Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:36.432247 containerd[1462]: time="2025-08-13T07:18:36.432140618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:36.433500 containerd[1462]: time="2025-08-13T07:18:36.433304172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.052317441s" Aug 13 07:18:36.433500 containerd[1462]: time="2025-08-13T07:18:36.433363479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:18:36.436837 containerd[1462]: time="2025-08-13T07:18:36.436783737Z" level=info msg="CreateContainer within sandbox \"f71b3c8c450cd7668c468094f02e59ff8167cfc34682b782a17f6d0022db8217\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:18:36.464646 containerd[1462]: time="2025-08-13T07:18:36.463863916Z" level=info msg="CreateContainer within 
sandbox \"f71b3c8c450cd7668c468094f02e59ff8167cfc34682b782a17f6d0022db8217\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"807c1edb4862b17fa8fb103f0d5dbc552fa383a0dd4ff7fb00e586a1e7241ba7\"" Aug 13 07:18:36.468303 containerd[1462]: time="2025-08-13T07:18:36.468201709Z" level=info msg="StartContainer for \"807c1edb4862b17fa8fb103f0d5dbc552fa383a0dd4ff7fb00e586a1e7241ba7\"" Aug 13 07:18:36.530793 systemd[1]: Started cri-containerd-807c1edb4862b17fa8fb103f0d5dbc552fa383a0dd4ff7fb00e586a1e7241ba7.scope - libcontainer container 807c1edb4862b17fa8fb103f0d5dbc552fa383a0dd4ff7fb00e586a1e7241ba7. Aug 13 07:18:36.572600 containerd[1462]: time="2025-08-13T07:18:36.572506328Z" level=info msg="StartContainer for \"807c1edb4862b17fa8fb103f0d5dbc552fa383a0dd4ff7fb00e586a1e7241ba7\" returns successfully" Aug 13 07:18:37.705365 containerd[1462]: time="2025-08-13T07:18:37.705269618Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:18:37.713399 systemd[1]: cri-containerd-807c1edb4862b17fa8fb103f0d5dbc552fa383a0dd4ff7fb00e586a1e7241ba7.scope: Deactivated successfully. Aug 13 07:18:37.758502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-807c1edb4862b17fa8fb103f0d5dbc552fa383a0dd4ff7fb00e586a1e7241ba7-rootfs.mount: Deactivated successfully. Aug 13 07:18:37.777414 kubelet[2602]: I0813 07:18:37.777351 2602 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:18:37.861278 systemd[1]: Created slice kubepods-burstable-pod84e92942_4591_4943_868f_92a2efe7e6af.slice - libcontainer container kubepods-burstable-pod84e92942_4591_4943_868f_92a2efe7e6af.slice. 
Aug 13 07:18:37.891494 kubelet[2602]: W0813 07:18:37.891430 2602 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' and this object Aug 13 07:18:37.891876 systemd[1]: Created slice kubepods-besteffort-pod454544c9_e57d_4404_ae95_88b611efc21a.slice - libcontainer container kubepods-besteffort-pod454544c9_e57d_4404_ae95_88b611efc21a.slice. Aug 13 07:18:37.892381 kubelet[2602]: E0813 07:18:37.892315 2602 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 07:18:37.923922 systemd[1]: Created slice kubepods-burstable-podd2864426_4b9a_4a74_b95d_9856eb5042a1.slice - libcontainer container kubepods-burstable-podd2864426_4b9a_4a74_b95d_9856eb5042a1.slice. 
Aug 13 07:18:37.941329 kubelet[2602]: I0813 07:18:37.940919 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b8b0bd01-1905-4ea2-9587-6ddd1435f3f6-calico-apiserver-certs\") pod \"calico-apiserver-5fb794d684-rjd8h\" (UID: \"b8b0bd01-1905-4ea2-9587-6ddd1435f3f6\") " pod="calico-apiserver/calico-apiserver-5fb794d684-rjd8h" Aug 13 07:18:37.941329 kubelet[2602]: I0813 07:18:37.940991 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvknv\" (UniqueName: \"kubernetes.io/projected/b8b0bd01-1905-4ea2-9587-6ddd1435f3f6-kube-api-access-cvknv\") pod \"calico-apiserver-5fb794d684-rjd8h\" (UID: \"b8b0bd01-1905-4ea2-9587-6ddd1435f3f6\") " pod="calico-apiserver/calico-apiserver-5fb794d684-rjd8h" Aug 13 07:18:37.948250 systemd[1]: Created slice kubepods-besteffort-pod8937aae4_009b_4f60_9764_2d5d28342995.slice - libcontainer container kubepods-besteffort-pod8937aae4_009b_4f60_9764_2d5d28342995.slice. Aug 13 07:18:37.961980 systemd[1]: Created slice kubepods-besteffort-podc4211a64_a195_4abd_8c5a_5097d18bfd52.slice - libcontainer container kubepods-besteffort-podc4211a64_a195_4abd_8c5a_5097d18bfd52.slice. Aug 13 07:18:37.980221 systemd[1]: Created slice kubepods-besteffort-podb8b0bd01_1905_4ea2_9587_6ddd1435f3f6.slice - libcontainer container kubepods-besteffort-podb8b0bd01_1905_4ea2_9587_6ddd1435f3f6.slice. Aug 13 07:18:38.000815 systemd[1]: Created slice kubepods-besteffort-podd0cf14fd_3d11_43c2_a719_49dbd30906de.slice - libcontainer container kubepods-besteffort-podd0cf14fd_3d11_43c2_a719_49dbd30906de.slice. 
Aug 13 07:18:38.042733 kubelet[2602]: I0813 07:18:38.042628 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4211a64-a195-4abd-8c5a-5097d18bfd52-whisker-backend-key-pair\") pod \"whisker-65577c7dd-fkmsm\" (UID: \"c4211a64-a195-4abd-8c5a-5097d18bfd52\") " pod="calico-system/whisker-65577c7dd-fkmsm" Aug 13 07:18:38.042733 kubelet[2602]: I0813 07:18:38.042701 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qplp\" (UniqueName: \"kubernetes.io/projected/d2864426-4b9a-4a74-b95d-9856eb5042a1-kube-api-access-4qplp\") pod \"coredns-668d6bf9bc-z22sd\" (UID: \"d2864426-4b9a-4a74-b95d-9856eb5042a1\") " pod="kube-system/coredns-668d6bf9bc-z22sd" Aug 13 07:18:38.042733 kubelet[2602]: I0813 07:18:38.042733 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d0cf14fd-3d11-43c2-a719-49dbd30906de-goldmane-key-pair\") pod \"goldmane-768f4c5c69-bbxp6\" (UID: \"d0cf14fd-3d11-43c2-a719-49dbd30906de\") " pod="calico-system/goldmane-768f4c5c69-bbxp6" Aug 13 07:18:38.042733 kubelet[2602]: I0813 07:18:38.042764 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/454544c9-e57d-4404-ae95-88b611efc21a-tigera-ca-bundle\") pod \"calico-kube-controllers-6fbc6d7cb9-xtnlk\" (UID: \"454544c9-e57d-4404-ae95-88b611efc21a\") " pod="calico-system/calico-kube-controllers-6fbc6d7cb9-xtnlk" Aug 13 07:18:38.043410 kubelet[2602]: I0813 07:18:38.042800 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp5lj\" (UniqueName: \"kubernetes.io/projected/d0cf14fd-3d11-43c2-a719-49dbd30906de-kube-api-access-rp5lj\") pod \"goldmane-768f4c5c69-bbxp6\" (UID: 
\"d0cf14fd-3d11-43c2-a719-49dbd30906de\") " pod="calico-system/goldmane-768f4c5c69-bbxp6" Aug 13 07:18:38.043410 kubelet[2602]: I0813 07:18:38.042836 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8rz4\" (UniqueName: \"kubernetes.io/projected/454544c9-e57d-4404-ae95-88b611efc21a-kube-api-access-k8rz4\") pod \"calico-kube-controllers-6fbc6d7cb9-xtnlk\" (UID: \"454544c9-e57d-4404-ae95-88b611efc21a\") " pod="calico-system/calico-kube-controllers-6fbc6d7cb9-xtnlk" Aug 13 07:18:38.043410 kubelet[2602]: I0813 07:18:38.042887 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4211a64-a195-4abd-8c5a-5097d18bfd52-whisker-ca-bundle\") pod \"whisker-65577c7dd-fkmsm\" (UID: \"c4211a64-a195-4abd-8c5a-5097d18bfd52\") " pod="calico-system/whisker-65577c7dd-fkmsm" Aug 13 07:18:38.043410 kubelet[2602]: I0813 07:18:38.042916 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84e92942-4591-4943-868f-92a2efe7e6af-config-volume\") pod \"coredns-668d6bf9bc-s4zz2\" (UID: \"84e92942-4591-4943-868f-92a2efe7e6af\") " pod="kube-system/coredns-668d6bf9bc-s4zz2" Aug 13 07:18:38.043410 kubelet[2602]: I0813 07:18:38.042949 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2864426-4b9a-4a74-b95d-9856eb5042a1-config-volume\") pod \"coredns-668d6bf9bc-z22sd\" (UID: \"d2864426-4b9a-4a74-b95d-9856eb5042a1\") " pod="kube-system/coredns-668d6bf9bc-z22sd" Aug 13 07:18:38.043734 kubelet[2602]: I0813 07:18:38.042977 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0cf14fd-3d11-43c2-a719-49dbd30906de-config\") pod 
\"goldmane-768f4c5c69-bbxp6\" (UID: \"d0cf14fd-3d11-43c2-a719-49dbd30906de\") " pod="calico-system/goldmane-768f4c5c69-bbxp6" Aug 13 07:18:38.043734 kubelet[2602]: I0813 07:18:38.043008 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlwpt\" (UniqueName: \"kubernetes.io/projected/8937aae4-009b-4f60-9764-2d5d28342995-kube-api-access-nlwpt\") pod \"calico-apiserver-5fb794d684-84gfw\" (UID: \"8937aae4-009b-4f60-9764-2d5d28342995\") " pod="calico-apiserver/calico-apiserver-5fb794d684-84gfw" Aug 13 07:18:38.043734 kubelet[2602]: I0813 07:18:38.043042 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sffc\" (UniqueName: \"kubernetes.io/projected/84e92942-4591-4943-868f-92a2efe7e6af-kube-api-access-9sffc\") pod \"coredns-668d6bf9bc-s4zz2\" (UID: \"84e92942-4591-4943-868f-92a2efe7e6af\") " pod="kube-system/coredns-668d6bf9bc-s4zz2" Aug 13 07:18:38.043734 kubelet[2602]: I0813 07:18:38.043075 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8937aae4-009b-4f60-9764-2d5d28342995-calico-apiserver-certs\") pod \"calico-apiserver-5fb794d684-84gfw\" (UID: \"8937aae4-009b-4f60-9764-2d5d28342995\") " pod="calico-apiserver/calico-apiserver-5fb794d684-84gfw" Aug 13 07:18:38.043734 kubelet[2602]: I0813 07:18:38.043132 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8f7g\" (UniqueName: \"kubernetes.io/projected/c4211a64-a195-4abd-8c5a-5097d18bfd52-kube-api-access-b8f7g\") pod \"whisker-65577c7dd-fkmsm\" (UID: \"c4211a64-a195-4abd-8c5a-5097d18bfd52\") " pod="calico-system/whisker-65577c7dd-fkmsm" Aug 13 07:18:38.044042 kubelet[2602]: I0813 07:18:38.043192 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0cf14fd-3d11-43c2-a719-49dbd30906de-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-bbxp6\" (UID: \"d0cf14fd-3d11-43c2-a719-49dbd30906de\") " pod="calico-system/goldmane-768f4c5c69-bbxp6" Aug 13 07:18:38.233040 systemd[1]: Created slice kubepods-besteffort-pod9995649e_a9c2_4dd0_ab3a_469f68507e9a.slice - libcontainer container kubepods-besteffort-pod9995649e_a9c2_4dd0_ab3a_469f68507e9a.slice. Aug 13 07:18:38.235349 containerd[1462]: time="2025-08-13T07:18:38.234698116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z22sd,Uid:d2864426-4b9a-4a74-b95d-9856eb5042a1,Namespace:kube-system,Attempt:0,}" Aug 13 07:18:38.238431 containerd[1462]: time="2025-08-13T07:18:38.238386738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmnnx,Uid:9995649e-a9c2-4dd0-ab3a-469f68507e9a,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:38.271575 containerd[1462]: time="2025-08-13T07:18:38.271445308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65577c7dd-fkmsm,Uid:c4211a64-a195-4abd-8c5a-5097d18bfd52,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:38.313633 containerd[1462]: time="2025-08-13T07:18:38.313303911Z" level=info msg="shim disconnected" id=807c1edb4862b17fa8fb103f0d5dbc552fa383a0dd4ff7fb00e586a1e7241ba7 namespace=k8s.io Aug 13 07:18:38.313633 containerd[1462]: time="2025-08-13T07:18:38.313370446Z" level=warning msg="cleaning up after shim disconnected" id=807c1edb4862b17fa8fb103f0d5dbc552fa383a0dd4ff7fb00e586a1e7241ba7 namespace=k8s.io Aug 13 07:18:38.313633 containerd[1462]: time="2025-08-13T07:18:38.313385485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:18:38.323763 containerd[1462]: time="2025-08-13T07:18:38.323711459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bbxp6,Uid:d0cf14fd-3d11-43c2-a719-49dbd30906de,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:38.416805 
containerd[1462]: time="2025-08-13T07:18:38.416744981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:18:38.482818 containerd[1462]: time="2025-08-13T07:18:38.482696508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4zz2,Uid:84e92942-4591-4943-868f-92a2efe7e6af,Namespace:kube-system,Attempt:0,}" Aug 13 07:18:38.511850 containerd[1462]: time="2025-08-13T07:18:38.511630682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbc6d7cb9-xtnlk,Uid:454544c9-e57d-4404-ae95-88b611efc21a,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:38.662243 containerd[1462]: time="2025-08-13T07:18:38.662148963Z" level=error msg="Failed to destroy network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.669084 containerd[1462]: time="2025-08-13T07:18:38.668997367Z" level=error msg="encountered an error cleaning up failed sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.670715 containerd[1462]: time="2025-08-13T07:18:38.669781091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bbxp6,Uid:d0cf14fd-3d11-43c2-a719-49dbd30906de,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Aug 13 07:18:38.671645 containerd[1462]: time="2025-08-13T07:18:38.669711626Z" level=error msg="Failed to destroy network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.673991 kubelet[2602]: E0813 07:18:38.672584 2602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.673991 kubelet[2602]: E0813 07:18:38.672682 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-bbxp6" Aug 13 07:18:38.673991 kubelet[2602]: E0813 07:18:38.672753 2602 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-bbxp6" Aug 13 07:18:38.674271 kubelet[2602]: E0813 07:18:38.672825 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-768f4c5c69-bbxp6_calico-system(d0cf14fd-3d11-43c2-a719-49dbd30906de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-bbxp6_calico-system(d0cf14fd-3d11-43c2-a719-49dbd30906de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-bbxp6" podUID="d0cf14fd-3d11-43c2-a719-49dbd30906de" Aug 13 07:18:38.675016 containerd[1462]: time="2025-08-13T07:18:38.674947235Z" level=error msg="encountered an error cleaning up failed sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.675752 containerd[1462]: time="2025-08-13T07:18:38.675207025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z22sd,Uid:d2864426-4b9a-4a74-b95d-9856eb5042a1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.675862 kubelet[2602]: E0813 07:18:38.675463 2602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.675862 kubelet[2602]: E0813 07:18:38.675589 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z22sd" Aug 13 07:18:38.675862 kubelet[2602]: E0813 07:18:38.675630 2602 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z22sd" Aug 13 07:18:38.676046 kubelet[2602]: E0813 07:18:38.675685 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z22sd_kube-system(d2864426-4b9a-4a74-b95d-9856eb5042a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z22sd_kube-system(d2864426-4b9a-4a74-b95d-9856eb5042a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z22sd" podUID="d2864426-4b9a-4a74-b95d-9856eb5042a1" Aug 13 07:18:38.689754 containerd[1462]: time="2025-08-13T07:18:38.689351325Z" level=error msg="Failed to destroy network for sandbox 
\"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.692127 containerd[1462]: time="2025-08-13T07:18:38.691742993Z" level=error msg="encountered an error cleaning up failed sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.692669 containerd[1462]: time="2025-08-13T07:18:38.692625375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65577c7dd-fkmsm,Uid:c4211a64-a195-4abd-8c5a-5097d18bfd52,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.693405 kubelet[2602]: E0813 07:18:38.693243 2602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.693405 kubelet[2602]: E0813 07:18:38.693322 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65577c7dd-fkmsm" Aug 13 07:18:38.693405 kubelet[2602]: E0813 07:18:38.693354 2602 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65577c7dd-fkmsm" Aug 13 07:18:38.693818 kubelet[2602]: E0813 07:18:38.693423 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65577c7dd-fkmsm_calico-system(c4211a64-a195-4abd-8c5a-5097d18bfd52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65577c7dd-fkmsm_calico-system(c4211a64-a195-4abd-8c5a-5097d18bfd52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65577c7dd-fkmsm" podUID="c4211a64-a195-4abd-8c5a-5097d18bfd52" Aug 13 07:18:38.694954 containerd[1462]: time="2025-08-13T07:18:38.694584027Z" level=error msg="Failed to destroy network for sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.696243 containerd[1462]: time="2025-08-13T07:18:38.696091799Z" level=error msg="encountered an error cleaning up failed 
sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.697178 containerd[1462]: time="2025-08-13T07:18:38.696433965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmnnx,Uid:9995649e-a9c2-4dd0-ab3a-469f68507e9a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.698275 kubelet[2602]: E0813 07:18:38.697827 2602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.698275 kubelet[2602]: E0813 07:18:38.697908 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmnnx" Aug 13 07:18:38.698275 kubelet[2602]: E0813 07:18:38.697947 2602 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmnnx" Aug 13 07:18:38.698493 kubelet[2602]: E0813 07:18:38.698032 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vmnnx_calico-system(9995649e-a9c2-4dd0-ab3a-469f68507e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vmnnx_calico-system(9995649e-a9c2-4dd0-ab3a-469f68507e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vmnnx" podUID="9995649e-a9c2-4dd0-ab3a-469f68507e9a" Aug 13 07:18:38.790434 containerd[1462]: time="2025-08-13T07:18:38.789756835Z" level=error msg="Failed to destroy network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.790434 containerd[1462]: time="2025-08-13T07:18:38.790238519Z" level=error msg="encountered an error cleaning up failed sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.790434 containerd[1462]: time="2025-08-13T07:18:38.790309725Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbc6d7cb9-xtnlk,Uid:454544c9-e57d-4404-ae95-88b611efc21a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.795542 kubelet[2602]: E0813 07:18:38.792673 2602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.795542 kubelet[2602]: E0813 07:18:38.792750 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fbc6d7cb9-xtnlk" Aug 13 07:18:38.795542 kubelet[2602]: E0813 07:18:38.792783 2602 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fbc6d7cb9-xtnlk" Aug 13 07:18:38.796154 containerd[1462]: 
time="2025-08-13T07:18:38.794431364Z" level=error msg="Failed to destroy network for sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.795984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423-shm.mount: Deactivated successfully. Aug 13 07:18:38.796581 kubelet[2602]: E0813 07:18:38.792844 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fbc6d7cb9-xtnlk_calico-system(454544c9-e57d-4404-ae95-88b611efc21a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fbc6d7cb9-xtnlk_calico-system(454544c9-e57d-4404-ae95-88b611efc21a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fbc6d7cb9-xtnlk" podUID="454544c9-e57d-4404-ae95-88b611efc21a" Aug 13 07:18:38.802168 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8-shm.mount: Deactivated successfully. 
Aug 13 07:18:38.806359 kubelet[2602]: E0813 07:18:38.805444 2602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.806359 kubelet[2602]: E0813 07:18:38.805541 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s4zz2" Aug 13 07:18:38.806359 kubelet[2602]: E0813 07:18:38.805578 2602 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s4zz2" Aug 13 07:18:38.806602 containerd[1462]: time="2025-08-13T07:18:38.802892273Z" level=error msg="encountered an error cleaning up failed sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.806602 containerd[1462]: time="2025-08-13T07:18:38.802994571Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-s4zz2,Uid:84e92942-4591-4943-868f-92a2efe7e6af,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:38.806726 kubelet[2602]: E0813 07:18:38.805634 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s4zz2_kube-system(84e92942-4591-4943-868f-92a2efe7e6af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s4zz2_kube-system(84e92942-4591-4943-868f-92a2efe7e6af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s4zz2" podUID="84e92942-4591-4943-868f-92a2efe7e6af" Aug 13 07:18:39.071111 kubelet[2602]: E0813 07:18:39.070941 2602 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:18:39.071111 kubelet[2602]: E0813 07:18:39.070995 2602 projected.go:194] Error preparing data for projected volume kube-api-access-cvknv for pod calico-apiserver/calico-apiserver-5fb794d684-rjd8h: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:18:39.071111 kubelet[2602]: E0813 07:18:39.071078 2602 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b8b0bd01-1905-4ea2-9587-6ddd1435f3f6-kube-api-access-cvknv podName:b8b0bd01-1905-4ea2-9587-6ddd1435f3f6 nodeName:}" failed. 
No retries permitted until 2025-08-13 07:18:39.571052947 +0000 UTC m=+33.600735271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cvknv" (UniqueName: "kubernetes.io/projected/b8b0bd01-1905-4ea2-9587-6ddd1435f3f6-kube-api-access-cvknv") pod "calico-apiserver-5fb794d684-rjd8h" (UID: "b8b0bd01-1905-4ea2-9587-6ddd1435f3f6") : failed to sync configmap cache: timed out waiting for the condition Aug 13 07:18:39.159271 containerd[1462]: time="2025-08-13T07:18:39.159215766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fb794d684-84gfw,Uid:8937aae4-009b-4f60-9764-2d5d28342995,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:18:39.258544 containerd[1462]: time="2025-08-13T07:18:39.258445139Z" level=error msg="Failed to destroy network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.258954 containerd[1462]: time="2025-08-13T07:18:39.258911225Z" level=error msg="encountered an error cleaning up failed sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.259062 containerd[1462]: time="2025-08-13T07:18:39.258986901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fb794d684-84gfw,Uid:8937aae4-009b-4f60-9764-2d5d28342995,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.259348 kubelet[2602]: E0813 07:18:39.259286 2602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.259448 kubelet[2602]: E0813 07:18:39.259367 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fb794d684-84gfw" Aug 13 07:18:39.259448 kubelet[2602]: E0813 07:18:39.259400 2602 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fb794d684-84gfw" Aug 13 07:18:39.259610 kubelet[2602]: E0813 07:18:39.259473 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fb794d684-84gfw_calico-apiserver(8937aae4-009b-4f60-9764-2d5d28342995)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fb794d684-84gfw_calico-apiserver(8937aae4-009b-4f60-9764-2d5d28342995)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fb794d684-84gfw" podUID="8937aae4-009b-4f60-9764-2d5d28342995" Aug 13 07:18:39.434646 kubelet[2602]: I0813 07:18:39.433983 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:18:39.442733 containerd[1462]: time="2025-08-13T07:18:39.442683642Z" level=info msg="StopPodSandbox for \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\"" Aug 13 07:18:39.445539 containerd[1462]: time="2025-08-13T07:18:39.444250101Z" level=info msg="Ensure that sandbox 5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726 in task-service has been cleanup successfully" Aug 13 07:18:39.462009 kubelet[2602]: I0813 07:18:39.461969 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:18:39.467256 containerd[1462]: time="2025-08-13T07:18:39.467206452Z" level=info msg="StopPodSandbox for \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\"" Aug 13 07:18:39.468168 containerd[1462]: time="2025-08-13T07:18:39.467805981Z" level=info msg="Ensure that sandbox 59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8 in task-service has been cleanup successfully" Aug 13 07:18:39.472311 kubelet[2602]: I0813 07:18:39.472263 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:18:39.476085 containerd[1462]: time="2025-08-13T07:18:39.475892384Z" level=info msg="StopPodSandbox for 
\"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\"" Aug 13 07:18:39.477013 containerd[1462]: time="2025-08-13T07:18:39.476977219Z" level=info msg="Ensure that sandbox 711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60 in task-service has been cleanup successfully" Aug 13 07:18:39.482072 kubelet[2602]: I0813 07:18:39.482036 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:18:39.485939 containerd[1462]: time="2025-08-13T07:18:39.485507691Z" level=info msg="StopPodSandbox for \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\"" Aug 13 07:18:39.488882 kubelet[2602]: I0813 07:18:39.488840 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:18:39.490052 containerd[1462]: time="2025-08-13T07:18:39.489979344Z" level=info msg="Ensure that sandbox e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a in task-service has been cleanup successfully" Aug 13 07:18:39.499357 containerd[1462]: time="2025-08-13T07:18:39.499306578Z" level=info msg="StopPodSandbox for \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\"" Aug 13 07:18:39.500139 containerd[1462]: time="2025-08-13T07:18:39.499582330Z" level=info msg="Ensure that sandbox cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb in task-service has been cleanup successfully" Aug 13 07:18:39.504986 kubelet[2602]: I0813 07:18:39.504944 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:18:39.507550 containerd[1462]: time="2025-08-13T07:18:39.506191596Z" level=info msg="StopPodSandbox for \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\"" Aug 13 07:18:39.507550 containerd[1462]: 
time="2025-08-13T07:18:39.506466118Z" level=info msg="Ensure that sandbox df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423 in task-service has been cleanup successfully" Aug 13 07:18:39.514736 kubelet[2602]: I0813 07:18:39.514601 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:18:39.517305 containerd[1462]: time="2025-08-13T07:18:39.517191842Z" level=info msg="StopPodSandbox for \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\"" Aug 13 07:18:39.517981 containerd[1462]: time="2025-08-13T07:18:39.517429690Z" level=info msg="Ensure that sandbox a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a in task-service has been cleanup successfully" Aug 13 07:18:39.670428 containerd[1462]: time="2025-08-13T07:18:39.670365489Z" level=error msg="StopPodSandbox for \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\" failed" error="failed to destroy network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.670988 kubelet[2602]: E0813 07:18:39.670945 2602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:18:39.671121 kubelet[2602]: E0813 07:18:39.671039 2602 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726"} Aug 13 07:18:39.671186 kubelet[2602]: E0813 07:18:39.671121 2602 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2864426-4b9a-4a74-b95d-9856eb5042a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:39.671186 kubelet[2602]: E0813 07:18:39.671159 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2864426-4b9a-4a74-b95d-9856eb5042a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z22sd" podUID="d2864426-4b9a-4a74-b95d-9856eb5042a1" Aug 13 07:18:39.685916 containerd[1462]: time="2025-08-13T07:18:39.685732661Z" level=error msg="StopPodSandbox for \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\" failed" error="failed to destroy network for sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.688302 kubelet[2602]: E0813 07:18:39.688091 2602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:18:39.688302 kubelet[2602]: E0813 07:18:39.688159 2602 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8"} Aug 13 07:18:39.688302 kubelet[2602]: E0813 07:18:39.688209 2602 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84e92942-4591-4943-868f-92a2efe7e6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:39.688302 kubelet[2602]: E0813 07:18:39.688248 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84e92942-4591-4943-868f-92a2efe7e6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s4zz2" podUID="84e92942-4591-4943-868f-92a2efe7e6af" Aug 13 07:18:39.714073 containerd[1462]: time="2025-08-13T07:18:39.713694167Z" level=error msg="StopPodSandbox for \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\" failed" error="failed to destroy network for sandbox 
\"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.714812 kubelet[2602]: E0813 07:18:39.714763 2602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:18:39.715028 kubelet[2602]: E0813 07:18:39.714999 2602 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a"} Aug 13 07:18:39.715612 kubelet[2602]: E0813 07:18:39.715172 2602 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4211a64-a195-4abd-8c5a-5097d18bfd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:39.715612 kubelet[2602]: E0813 07:18:39.715219 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4211a64-a195-4abd-8c5a-5097d18bfd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65577c7dd-fkmsm" podUID="c4211a64-a195-4abd-8c5a-5097d18bfd52" Aug 13 07:18:39.717105 containerd[1462]: time="2025-08-13T07:18:39.717058755Z" level=error msg="StopPodSandbox for \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\" failed" error="failed to destroy network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.717473 kubelet[2602]: E0813 07:18:39.717432 2602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:18:39.717694 kubelet[2602]: E0813 07:18:39.717669 2602 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb"} Aug 13 07:18:39.717915 kubelet[2602]: E0813 07:18:39.717814 2602 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8937aae4-009b-4f60-9764-2d5d28342995\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Aug 13 07:18:39.717915 kubelet[2602]: E0813 07:18:39.717867 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8937aae4-009b-4f60-9764-2d5d28342995\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fb794d684-84gfw" podUID="8937aae4-009b-4f60-9764-2d5d28342995" Aug 13 07:18:39.721502 containerd[1462]: time="2025-08-13T07:18:39.720942983Z" level=error msg="StopPodSandbox for \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\" failed" error="failed to destroy network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.721955 kubelet[2602]: E0813 07:18:39.721777 2602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:18:39.721955 kubelet[2602]: E0813 07:18:39.721829 2602 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60"} Aug 13 07:18:39.721955 kubelet[2602]: E0813 07:18:39.721871 2602 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0cf14fd-3d11-43c2-a719-49dbd30906de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:39.721955 kubelet[2602]: E0813 07:18:39.721903 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0cf14fd-3d11-43c2-a719-49dbd30906de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-bbxp6" podUID="d0cf14fd-3d11-43c2-a719-49dbd30906de" Aug 13 07:18:39.722860 containerd[1462]: time="2025-08-13T07:18:39.722798957Z" level=error msg="StopPodSandbox for \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\" failed" error="failed to destroy network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.723254 kubelet[2602]: E0813 07:18:39.723086 2602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:18:39.723254 kubelet[2602]: E0813 07:18:39.723137 2602 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423"} Aug 13 07:18:39.723254 kubelet[2602]: E0813 07:18:39.723180 2602 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"454544c9-e57d-4404-ae95-88b611efc21a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:39.723254 kubelet[2602]: E0813 07:18:39.723212 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"454544c9-e57d-4404-ae95-88b611efc21a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fbc6d7cb9-xtnlk" podUID="454544c9-e57d-4404-ae95-88b611efc21a" Aug 13 07:18:39.727688 containerd[1462]: time="2025-08-13T07:18:39.727637211Z" level=error msg="StopPodSandbox for \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\" failed" error="failed to destroy network for sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.728157 kubelet[2602]: E0813 07:18:39.727914 2602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:18:39.728157 kubelet[2602]: E0813 07:18:39.727975 2602 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a"} Aug 13 07:18:39.728157 kubelet[2602]: E0813 07:18:39.728019 2602 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9995649e-a9c2-4dd0-ab3a-469f68507e9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:39.728157 kubelet[2602]: E0813 07:18:39.728057 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9995649e-a9c2-4dd0-ab3a-469f68507e9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-vmnnx" podUID="9995649e-a9c2-4dd0-ab3a-469f68507e9a" Aug 13 07:18:39.763745 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb-shm.mount: Deactivated successfully. Aug 13 07:18:39.823035 containerd[1462]: time="2025-08-13T07:18:39.822973084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fb794d684-rjd8h,Uid:b8b0bd01-1905-4ea2-9587-6ddd1435f3f6,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:18:39.974641 containerd[1462]: time="2025-08-13T07:18:39.972590649Z" level=error msg="Failed to destroy network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.976219 containerd[1462]: time="2025-08-13T07:18:39.975934608Z" level=error msg="encountered an error cleaning up failed sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.976219 containerd[1462]: time="2025-08-13T07:18:39.976101879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fb794d684-rjd8h,Uid:b8b0bd01-1905-4ea2-9587-6ddd1435f3f6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.981891 kubelet[2602]: E0813 07:18:39.978864 2602 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:39.981891 kubelet[2602]: E0813 07:18:39.978952 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fb794d684-rjd8h" Aug 13 07:18:39.981891 kubelet[2602]: E0813 07:18:39.978987 2602 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fb794d684-rjd8h" Aug 13 07:18:39.982556 kubelet[2602]: E0813 07:18:39.979056 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fb794d684-rjd8h_calico-apiserver(b8b0bd01-1905-4ea2-9587-6ddd1435f3f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fb794d684-rjd8h_calico-apiserver(b8b0bd01-1905-4ea2-9587-6ddd1435f3f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fb794d684-rjd8h" podUID="b8b0bd01-1905-4ea2-9587-6ddd1435f3f6" Aug 13 07:18:39.989391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0-shm.mount: Deactivated successfully. Aug 13 07:18:40.518214 kubelet[2602]: I0813 07:18:40.518183 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:18:40.521099 containerd[1462]: time="2025-08-13T07:18:40.519227963Z" level=info msg="StopPodSandbox for \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\"" Aug 13 07:18:40.521099 containerd[1462]: time="2025-08-13T07:18:40.519465205Z" level=info msg="Ensure that sandbox 8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0 in task-service has been cleanup successfully" Aug 13 07:18:40.569299 containerd[1462]: time="2025-08-13T07:18:40.569223248Z" level=error msg="StopPodSandbox for \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\" failed" error="failed to destroy network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:40.569618 kubelet[2602]: E0813 07:18:40.569557 2602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:18:40.569738 kubelet[2602]: E0813 07:18:40.569636 2602 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0"} Aug 13 07:18:40.569738 kubelet[2602]: E0813 07:18:40.569694 2602 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8b0bd01-1905-4ea2-9587-6ddd1435f3f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:40.569916 kubelet[2602]: E0813 07:18:40.569733 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8b0bd01-1905-4ea2-9587-6ddd1435f3f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fb794d684-rjd8h" podUID="b8b0bd01-1905-4ea2-9587-6ddd1435f3f6" Aug 13 07:18:45.509737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2285770053.mount: Deactivated successfully. 
Aug 13 07:18:45.547990 containerd[1462]: time="2025-08-13T07:18:45.547928939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:45.549402 containerd[1462]: time="2025-08-13T07:18:45.549341086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:18:45.550666 containerd[1462]: time="2025-08-13T07:18:45.550580972Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:45.554618 containerd[1462]: time="2025-08-13T07:18:45.554571472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:45.555620 containerd[1462]: time="2025-08-13T07:18:45.555574492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.137458387s" Aug 13 07:18:45.555620 containerd[1462]: time="2025-08-13T07:18:45.555615643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:18:45.579401 containerd[1462]: time="2025-08-13T07:18:45.579340677Z" level=info msg="CreateContainer within sandbox \"f71b3c8c450cd7668c468094f02e59ff8167cfc34682b782a17f6d0022db8217\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:18:45.611195 containerd[1462]: time="2025-08-13T07:18:45.611079146Z" level=info 
msg="CreateContainer within sandbox \"f71b3c8c450cd7668c468094f02e59ff8167cfc34682b782a17f6d0022db8217\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d9a2eff74ddb6d6b5bfcde75387cf7c6fa7fe0bd77dea39f3b27a66826cd6ff7\"" Aug 13 07:18:45.611861 containerd[1462]: time="2025-08-13T07:18:45.611762457Z" level=info msg="StartContainer for \"d9a2eff74ddb6d6b5bfcde75387cf7c6fa7fe0bd77dea39f3b27a66826cd6ff7\"" Aug 13 07:18:45.653109 systemd[1]: Started cri-containerd-d9a2eff74ddb6d6b5bfcde75387cf7c6fa7fe0bd77dea39f3b27a66826cd6ff7.scope - libcontainer container d9a2eff74ddb6d6b5bfcde75387cf7c6fa7fe0bd77dea39f3b27a66826cd6ff7. Aug 13 07:18:45.699046 containerd[1462]: time="2025-08-13T07:18:45.698988422Z" level=info msg="StartContainer for \"d9a2eff74ddb6d6b5bfcde75387cf7c6fa7fe0bd77dea39f3b27a66826cd6ff7\" returns successfully" Aug 13 07:18:45.829336 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:18:45.829576 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Aug 13 07:18:45.978035 containerd[1462]: time="2025-08-13T07:18:45.977962407Z" level=info msg="StopPodSandbox for \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\"" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.094 [INFO][3789] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.095 [INFO][3789] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" iface="eth0" netns="/var/run/netns/cni-a82df872-b1ce-1290-58da-72cd66a66ed1" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.095 [INFO][3789] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" iface="eth0" netns="/var/run/netns/cni-a82df872-b1ce-1290-58da-72cd66a66ed1" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.097 [INFO][3789] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" iface="eth0" netns="/var/run/netns/cni-a82df872-b1ce-1290-58da-72cd66a66ed1" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.097 [INFO][3789] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.097 [INFO][3789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.151 [INFO][3800] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" HandleID="k8s-pod-network.a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.152 [INFO][3800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.152 [INFO][3800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.163 [WARNING][3800] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" HandleID="k8s-pod-network.a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.163 [INFO][3800] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" HandleID="k8s-pod-network.a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.166 [INFO][3800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:46.173718 containerd[1462]: 2025-08-13 07:18:46.171 [INFO][3789] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:18:46.174815 containerd[1462]: time="2025-08-13T07:18:46.173876702Z" level=info msg="TearDown network for sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\" successfully" Aug 13 07:18:46.174815 containerd[1462]: time="2025-08-13T07:18:46.173933488Z" level=info msg="StopPodSandbox for \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\" returns successfully" Aug 13 07:18:46.215053 kubelet[2602]: I0813 07:18:46.213607 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4211a64-a195-4abd-8c5a-5097d18bfd52-whisker-ca-bundle\") pod \"c4211a64-a195-4abd-8c5a-5097d18bfd52\" (UID: \"c4211a64-a195-4abd-8c5a-5097d18bfd52\") " Aug 13 07:18:46.215053 kubelet[2602]: I0813 07:18:46.213671 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" 
(UniqueName: \"kubernetes.io/secret/c4211a64-a195-4abd-8c5a-5097d18bfd52-whisker-backend-key-pair\") pod \"c4211a64-a195-4abd-8c5a-5097d18bfd52\" (UID: \"c4211a64-a195-4abd-8c5a-5097d18bfd52\") " Aug 13 07:18:46.215053 kubelet[2602]: I0813 07:18:46.213711 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8f7g\" (UniqueName: \"kubernetes.io/projected/c4211a64-a195-4abd-8c5a-5097d18bfd52-kube-api-access-b8f7g\") pod \"c4211a64-a195-4abd-8c5a-5097d18bfd52\" (UID: \"c4211a64-a195-4abd-8c5a-5097d18bfd52\") " Aug 13 07:18:46.220096 kubelet[2602]: I0813 07:18:46.219856 2602 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4211a64-a195-4abd-8c5a-5097d18bfd52-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c4211a64-a195-4abd-8c5a-5097d18bfd52" (UID: "c4211a64-a195-4abd-8c5a-5097d18bfd52"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:18:46.230643 kubelet[2602]: I0813 07:18:46.230580 2602 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4211a64-a195-4abd-8c5a-5097d18bfd52-kube-api-access-b8f7g" (OuterVolumeSpecName: "kube-api-access-b8f7g") pod "c4211a64-a195-4abd-8c5a-5097d18bfd52" (UID: "c4211a64-a195-4abd-8c5a-5097d18bfd52"). InnerVolumeSpecName "kube-api-access-b8f7g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:18:46.232772 kubelet[2602]: I0813 07:18:46.232727 2602 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4211a64-a195-4abd-8c5a-5097d18bfd52-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c4211a64-a195-4abd-8c5a-5097d18bfd52" (UID: "c4211a64-a195-4abd-8c5a-5097d18bfd52"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:18:46.243823 systemd[1]: Removed slice kubepods-besteffort-podc4211a64_a195_4abd_8c5a_5097d18bfd52.slice - libcontainer container kubepods-besteffort-podc4211a64_a195_4abd_8c5a_5097d18bfd52.slice. Aug 13 07:18:46.315152 kubelet[2602]: I0813 07:18:46.315066 2602 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4211a64-a195-4abd-8c5a-5097d18bfd52-whisker-backend-key-pair\") on node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 07:18:46.315152 kubelet[2602]: I0813 07:18:46.315159 2602 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b8f7g\" (UniqueName: \"kubernetes.io/projected/c4211a64-a195-4abd-8c5a-5097d18bfd52-kube-api-access-b8f7g\") on node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 07:18:46.315152 kubelet[2602]: I0813 07:18:46.315180 2602 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4211a64-a195-4abd-8c5a-5097d18bfd52-whisker-ca-bundle\") on node \"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 07:18:46.510879 systemd[1]: run-netns-cni\x2da82df872\x2db1ce\x2d1290\x2d58da\x2d72cd66a66ed1.mount: Deactivated successfully. Aug 13 07:18:46.511060 systemd[1]: var-lib-kubelet-pods-c4211a64\x2da195\x2d4abd\x2d8c5a\x2d5097d18bfd52-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db8f7g.mount: Deactivated successfully. Aug 13 07:18:46.514621 systemd[1]: var-lib-kubelet-pods-c4211a64\x2da195\x2d4abd\x2d8c5a\x2d5097d18bfd52-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:18:46.589636 systemd[1]: run-containerd-runc-k8s.io-d9a2eff74ddb6d6b5bfcde75387cf7c6fa7fe0bd77dea39f3b27a66826cd6ff7-runc.bzyVdk.mount: Deactivated successfully. 
Aug 13 07:18:46.592487 kubelet[2602]: I0813 07:18:46.592409 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fhw5t" podStartSLOduration=1.986662699 podStartE2EDuration="18.59238476s" podCreationTimestamp="2025-08-13 07:18:28 +0000 UTC" firstStartedPulling="2025-08-13 07:18:28.9509784 +0000 UTC m=+22.980660714" lastFinishedPulling="2025-08-13 07:18:45.556700454 +0000 UTC m=+39.586382775" observedRunningTime="2025-08-13 07:18:46.589527161 +0000 UTC m=+40.619209483" watchObservedRunningTime="2025-08-13 07:18:46.59238476 +0000 UTC m=+40.622067090" Aug 13 07:18:46.692575 systemd[1]: Created slice kubepods-besteffort-podfa13eda0_592a_45b6_a0f6_31bf00b0f8d8.slice - libcontainer container kubepods-besteffort-podfa13eda0_592a_45b6_a0f6_31bf00b0f8d8.slice. Aug 13 07:18:46.718641 kubelet[2602]: I0813 07:18:46.718586 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa13eda0-592a-45b6-a0f6-31bf00b0f8d8-whisker-backend-key-pair\") pod \"whisker-7c9bbcb7b4-lxb4l\" (UID: \"fa13eda0-592a-45b6-a0f6-31bf00b0f8d8\") " pod="calico-system/whisker-7c9bbcb7b4-lxb4l" Aug 13 07:18:46.718823 kubelet[2602]: I0813 07:18:46.718667 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa13eda0-592a-45b6-a0f6-31bf00b0f8d8-whisker-ca-bundle\") pod \"whisker-7c9bbcb7b4-lxb4l\" (UID: \"fa13eda0-592a-45b6-a0f6-31bf00b0f8d8\") " pod="calico-system/whisker-7c9bbcb7b4-lxb4l" Aug 13 07:18:46.718823 kubelet[2602]: I0813 07:18:46.718696 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbhpc\" (UniqueName: \"kubernetes.io/projected/fa13eda0-592a-45b6-a0f6-31bf00b0f8d8-kube-api-access-fbhpc\") pod \"whisker-7c9bbcb7b4-lxb4l\" (UID: 
\"fa13eda0-592a-45b6-a0f6-31bf00b0f8d8\") " pod="calico-system/whisker-7c9bbcb7b4-lxb4l" Aug 13 07:18:46.997973 containerd[1462]: time="2025-08-13T07:18:46.997908841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c9bbcb7b4-lxb4l,Uid:fa13eda0-592a-45b6-a0f6-31bf00b0f8d8,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:47.165549 systemd-networkd[1377]: cali873b64e15ae: Link UP Aug 13 07:18:47.166678 systemd-networkd[1377]: cali873b64e15ae: Gained carrier Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.047 [INFO][3848] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.064 [INFO][3848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0 whisker-7c9bbcb7b4- calico-system fa13eda0-592a-45b6-a0f6-31bf00b0f8d8 880 0 2025-08-13 07:18:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c9bbcb7b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal whisker-7c9bbcb7b4-lxb4l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali873b64e15ae [] [] }} ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Namespace="calico-system" Pod="whisker-7c9bbcb7b4-lxb4l" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.064 [INFO][3848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Namespace="calico-system" Pod="whisker-7c9bbcb7b4-lxb4l" 
WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.104 [INFO][3859] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" HandleID="k8s-pod-network.8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.104 [INFO][3859] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" HandleID="k8s-pod-network.8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", "pod":"whisker-7c9bbcb7b4-lxb4l", "timestamp":"2025-08-13 07:18:47.10472548 +0000 UTC"}, Hostname:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.105 [INFO][3859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.105 [INFO][3859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.105 [INFO][3859] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.114 [INFO][3859] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.120 [INFO][3859] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.127 [INFO][3859] ipam/ipam.go 511: Trying affinity for 192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.130 [INFO][3859] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.133 [INFO][3859] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.133 [INFO][3859] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.135 [INFO][3859] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324 Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.142 [INFO][3859] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.35.128/26 handle="k8s-pod-network.8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.152 [INFO][3859] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.129/26] block=192.168.35.128/26 handle="k8s-pod-network.8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.152 [INFO][3859] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.129/26] handle="k8s-pod-network.8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.152 [INFO][3859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:47.190735 containerd[1462]: 2025-08-13 07:18:47.152 [INFO][3859] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.129/26] IPv6=[] ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" HandleID="k8s-pod-network.8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" Aug 13 07:18:47.192187 containerd[1462]: 2025-08-13 07:18:47.154 [INFO][3848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Namespace="calico-system" Pod="whisker-7c9bbcb7b4-lxb4l" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0", GenerateName:"whisker-7c9bbcb7b4-", Namespace:"calico-system", SelfLink:"", UID:"fa13eda0-592a-45b6-a0f6-31bf00b0f8d8", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c9bbcb7b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-7c9bbcb7b4-lxb4l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali873b64e15ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:47.192187 containerd[1462]: 2025-08-13 07:18:47.154 [INFO][3848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.129/32] ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Namespace="calico-system" Pod="whisker-7c9bbcb7b4-lxb4l" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" Aug 13 07:18:47.192187 containerd[1462]: 2025-08-13 07:18:47.154 [INFO][3848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali873b64e15ae 
ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Namespace="calico-system" Pod="whisker-7c9bbcb7b4-lxb4l" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" Aug 13 07:18:47.192187 containerd[1462]: 2025-08-13 07:18:47.167 [INFO][3848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Namespace="calico-system" Pod="whisker-7c9bbcb7b4-lxb4l" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" Aug 13 07:18:47.192187 containerd[1462]: 2025-08-13 07:18:47.168 [INFO][3848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Namespace="calico-system" Pod="whisker-7c9bbcb7b4-lxb4l" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0", GenerateName:"whisker-7c9bbcb7b4-", Namespace:"calico-system", SelfLink:"", UID:"fa13eda0-592a-45b6-a0f6-31bf00b0f8d8", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c9bbcb7b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324", Pod:"whisker-7c9bbcb7b4-lxb4l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali873b64e15ae", MAC:"1e:f9:0d:67:65:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:47.192187 containerd[1462]: 2025-08-13 07:18:47.186 [INFO][3848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324" Namespace="calico-system" Pod="whisker-7c9bbcb7b4-lxb4l" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--7c9bbcb7b4--lxb4l-eth0" Aug 13 07:18:47.218635 containerd[1462]: time="2025-08-13T07:18:47.218447545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:47.220531 containerd[1462]: time="2025-08-13T07:18:47.219154285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:47.220531 containerd[1462]: time="2025-08-13T07:18:47.219200282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:47.220531 containerd[1462]: time="2025-08-13T07:18:47.219325823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:47.246846 systemd[1]: Started cri-containerd-8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324.scope - libcontainer container 8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324. Aug 13 07:18:47.310048 containerd[1462]: time="2025-08-13T07:18:47.310001491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c9bbcb7b4-lxb4l,Uid:fa13eda0-592a-45b6-a0f6-31bf00b0f8d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324\"" Aug 13 07:18:47.312631 containerd[1462]: time="2025-08-13T07:18:47.312590604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:18:47.605902 systemd[1]: run-containerd-runc-k8s.io-d9a2eff74ddb6d6b5bfcde75387cf7c6fa7fe0bd77dea39f3b27a66826cd6ff7-runc.qOgqHi.mount: Deactivated successfully. Aug 13 07:18:48.215290 kubelet[2602]: I0813 07:18:48.215226 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4211a64-a195-4abd-8c5a-5097d18bfd52" path="/var/lib/kubelet/pods/c4211a64-a195-4abd-8c5a-5097d18bfd52/volumes" Aug 13 07:18:48.453936 containerd[1462]: time="2025-08-13T07:18:48.453861960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:48.455477 containerd[1462]: time="2025-08-13T07:18:48.455395042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:18:48.457199 containerd[1462]: time="2025-08-13T07:18:48.457127967Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:48.460349 containerd[1462]: time="2025-08-13T07:18:48.460274917Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:48.461483 containerd[1462]: time="2025-08-13T07:18:48.461294124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.148652593s" Aug 13 07:18:48.461483 containerd[1462]: time="2025-08-13T07:18:48.461342666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:18:48.465560 containerd[1462]: time="2025-08-13T07:18:48.465335392Z" level=info msg="CreateContainer within sandbox \"8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:18:48.472924 systemd-networkd[1377]: cali873b64e15ae: Gained IPv6LL Aug 13 07:18:48.485531 containerd[1462]: time="2025-08-13T07:18:48.485351685Z" level=info msg="CreateContainer within sandbox \"8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"839e9376a4f59eb8eb549d5ad66930ae39930cfc202bbce5684121add05e3e95\"" Aug 13 07:18:48.486676 containerd[1462]: time="2025-08-13T07:18:48.486443240Z" level=info msg="StartContainer for \"839e9376a4f59eb8eb549d5ad66930ae39930cfc202bbce5684121add05e3e95\"" Aug 13 07:18:48.538725 systemd[1]: Started cri-containerd-839e9376a4f59eb8eb549d5ad66930ae39930cfc202bbce5684121add05e3e95.scope - libcontainer container 839e9376a4f59eb8eb549d5ad66930ae39930cfc202bbce5684121add05e3e95. 
Aug 13 07:18:48.601549 containerd[1462]: time="2025-08-13T07:18:48.601465367Z" level=info msg="StartContainer for \"839e9376a4f59eb8eb549d5ad66930ae39930cfc202bbce5684121add05e3e95\" returns successfully" Aug 13 07:18:48.603572 containerd[1462]: time="2025-08-13T07:18:48.603249955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:18:50.778334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497862220.mount: Deactivated successfully. Aug 13 07:18:50.799237 containerd[1462]: time="2025-08-13T07:18:50.799167135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:50.800556 containerd[1462]: time="2025-08-13T07:18:50.800457013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:18:50.802220 containerd[1462]: time="2025-08-13T07:18:50.802147518Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:50.805564 containerd[1462]: time="2025-08-13T07:18:50.805491097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:50.807166 containerd[1462]: time="2025-08-13T07:18:50.806700044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.203402443s" Aug 13 07:18:50.807166 containerd[1462]: 
time="2025-08-13T07:18:50.806753632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:18:50.810291 containerd[1462]: time="2025-08-13T07:18:50.809933125Z" level=info msg="CreateContainer within sandbox \"8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:18:50.831066 containerd[1462]: time="2025-08-13T07:18:50.830965434Z" level=info msg="CreateContainer within sandbox \"8ffe3127485a3a61013a9fbef898871d5b976993b62363ec0f2379cf8c9ab324\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"756fa05a5edc392c72340474f67160861d4e96eaa874bc4b25589da74f8063eb\"" Aug 13 07:18:50.831921 containerd[1462]: time="2025-08-13T07:18:50.831779831Z" level=info msg="StartContainer for \"756fa05a5edc392c72340474f67160861d4e96eaa874bc4b25589da74f8063eb\"" Aug 13 07:18:50.860608 ntpd[1430]: Listen normally on 7 cali873b64e15ae [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 07:18:50.866619 ntpd[1430]: 13 Aug 07:18:50 ntpd[1430]: Listen normally on 7 cali873b64e15ae [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 07:18:50.894823 systemd[1]: Started cri-containerd-756fa05a5edc392c72340474f67160861d4e96eaa874bc4b25589da74f8063eb.scope - libcontainer container 756fa05a5edc392c72340474f67160861d4e96eaa874bc4b25589da74f8063eb. 
Aug 13 07:18:50.955033 containerd[1462]: time="2025-08-13T07:18:50.954959735Z" level=info msg="StartContainer for \"756fa05a5edc392c72340474f67160861d4e96eaa874bc4b25589da74f8063eb\" returns successfully" Aug 13 07:18:51.212583 containerd[1462]: time="2025-08-13T07:18:51.212323507Z" level=info msg="StopPodSandbox for \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\"" Aug 13 07:18:51.213220 containerd[1462]: time="2025-08-13T07:18:51.212325368Z" level=info msg="StopPodSandbox for \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\"" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.314 [INFO][4175] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.319 [INFO][4175] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" iface="eth0" netns="/var/run/netns/cni-34a0db6e-7568-b5f8-02f6-a10ff031b0f4" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.320 [INFO][4175] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" iface="eth0" netns="/var/run/netns/cni-34a0db6e-7568-b5f8-02f6-a10ff031b0f4" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.330 [INFO][4175] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" iface="eth0" netns="/var/run/netns/cni-34a0db6e-7568-b5f8-02f6-a10ff031b0f4" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.331 [INFO][4175] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.331 [INFO][4175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.450 [INFO][4193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" HandleID="k8s-pod-network.df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.452 [INFO][4193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.452 [INFO][4193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.469 [WARNING][4193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" HandleID="k8s-pod-network.df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.469 [INFO][4193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" HandleID="k8s-pod-network.df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.472 [INFO][4193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:51.478949 containerd[1462]: 2025-08-13 07:18:51.477 [INFO][4175] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:18:51.484012 containerd[1462]: time="2025-08-13T07:18:51.481834572Z" level=info msg="TearDown network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\" successfully" Aug 13 07:18:51.484012 containerd[1462]: time="2025-08-13T07:18:51.481886474Z" level=info msg="StopPodSandbox for \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\" returns successfully" Aug 13 07:18:51.487559 containerd[1462]: time="2025-08-13T07:18:51.485471821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbc6d7cb9-xtnlk,Uid:454544c9-e57d-4404-ae95-88b611efc21a,Namespace:calico-system,Attempt:1,}" Aug 13 07:18:51.487307 systemd[1]: run-netns-cni\x2d34a0db6e\x2d7568\x2db5f8\x2d02f6\x2da10ff031b0f4.mount: Deactivated successfully. 
Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.335 [INFO][4176] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.335 [INFO][4176] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" iface="eth0" netns="/var/run/netns/cni-cb486c19-1c45-1afe-a253-544314e152b8" Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.336 [INFO][4176] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" iface="eth0" netns="/var/run/netns/cni-cb486c19-1c45-1afe-a253-544314e152b8" Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.337 [INFO][4176] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" iface="eth0" netns="/var/run/netns/cni-cb486c19-1c45-1afe-a253-544314e152b8" Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.337 [INFO][4176] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.337 [INFO][4176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.454 [INFO][4192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" HandleID="k8s-pod-network.59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:51.521617 containerd[1462]: 
2025-08-13 07:18:51.455 [INFO][4192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.474 [INFO][4192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.499 [WARNING][4192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" HandleID="k8s-pod-network.59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.499 [INFO][4192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" HandleID="k8s-pod-network.59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.504 [INFO][4192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:51.521617 containerd[1462]: 2025-08-13 07:18:51.508 [INFO][4176] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:18:51.525143 containerd[1462]: time="2025-08-13T07:18:51.524876089Z" level=info msg="TearDown network for sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\" successfully" Aug 13 07:18:51.525143 containerd[1462]: time="2025-08-13T07:18:51.524936880Z" level=info msg="StopPodSandbox for \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\" returns successfully" Aug 13 07:18:51.528107 containerd[1462]: time="2025-08-13T07:18:51.527726455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4zz2,Uid:84e92942-4591-4943-868f-92a2efe7e6af,Namespace:kube-system,Attempt:1,}" Aug 13 07:18:51.542188 systemd[1]: run-netns-cni\x2dcb486c19\x2d1c45\x2d1afe\x2da253\x2d544314e152b8.mount: Deactivated successfully. Aug 13 07:18:51.619284 kubelet[2602]: I0813 07:18:51.618558 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7c9bbcb7b4-lxb4l" podStartSLOduration=2.121950858 podStartE2EDuration="5.618097254s" podCreationTimestamp="2025-08-13 07:18:46 +0000 UTC" firstStartedPulling="2025-08-13 07:18:47.311879991 +0000 UTC m=+41.341562309" lastFinishedPulling="2025-08-13 07:18:50.808026388 +0000 UTC m=+44.837708705" observedRunningTime="2025-08-13 07:18:51.616445614 +0000 UTC m=+45.646127942" watchObservedRunningTime="2025-08-13 07:18:51.618097254 +0000 UTC m=+45.647779581" Aug 13 07:18:51.945350 systemd-networkd[1377]: calibf5f020c0d2: Link UP Aug 13 07:18:51.949615 systemd-networkd[1377]: calibf5f020c0d2: Gained carrier Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.660 [INFO][4219] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.711 [INFO][4219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0 calico-kube-controllers-6fbc6d7cb9- calico-system 454544c9-e57d-4404-ae95-88b611efc21a 906 0 2025-08-13 07:18:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fbc6d7cb9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal calico-kube-controllers-6fbc6d7cb9-xtnlk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibf5f020c0d2 [] [] }} ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Namespace="calico-system" Pod="calico-kube-controllers-6fbc6d7cb9-xtnlk" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.712 [INFO][4219] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Namespace="calico-system" Pod="calico-kube-controllers-6fbc6d7cb9-xtnlk" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.812 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" HandleID="k8s-pod-network.4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.820 [INFO][4248] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" HandleID="k8s-pod-network.4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", "pod":"calico-kube-controllers-6fbc6d7cb9-xtnlk", "timestamp":"2025-08-13 07:18:51.812391226 +0000 UTC"}, Hostname:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.820 [INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.820 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.820 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.852 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.870 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.885 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.890 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.895 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.895 [INFO][4248] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.900 [INFO][4248] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7 Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.909 [INFO][4248] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.35.128/26 handle="k8s-pod-network.4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.920 [INFO][4248] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.130/26] block=192.168.35.128/26 handle="k8s-pod-network.4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.921 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.130/26] handle="k8s-pod-network.4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.921 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:52.004898 containerd[1462]: 2025-08-13 07:18:51.921 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.130/26] IPv6=[] ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" HandleID="k8s-pod-network.4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:52.015316 containerd[1462]: 2025-08-13 07:18:51.927 [INFO][4219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Namespace="calico-system" Pod="calico-kube-controllers-6fbc6d7cb9-xtnlk" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0", GenerateName:"calico-kube-controllers-6fbc6d7cb9-", Namespace:"calico-system", SelfLink:"", UID:"454544c9-e57d-4404-ae95-88b611efc21a", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbc6d7cb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-6fbc6d7cb9-xtnlk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf5f020c0d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:52.015316 containerd[1462]: 2025-08-13 07:18:51.927 [INFO][4219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.130/32] ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Namespace="calico-system" Pod="calico-kube-controllers-6fbc6d7cb9-xtnlk" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:52.015316 containerd[1462]: 2025-08-13 
07:18:51.927 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf5f020c0d2 ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Namespace="calico-system" Pod="calico-kube-controllers-6fbc6d7cb9-xtnlk" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:52.015316 containerd[1462]: 2025-08-13 07:18:51.951 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Namespace="calico-system" Pod="calico-kube-controllers-6fbc6d7cb9-xtnlk" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:52.015316 containerd[1462]: 2025-08-13 07:18:51.959 [INFO][4219] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Namespace="calico-system" Pod="calico-kube-controllers-6fbc6d7cb9-xtnlk" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0", GenerateName:"calico-kube-controllers-6fbc6d7cb9-", Namespace:"calico-system", SelfLink:"", UID:"454544c9-e57d-4404-ae95-88b611efc21a", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"6fbc6d7cb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7", Pod:"calico-kube-controllers-6fbc6d7cb9-xtnlk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf5f020c0d2", MAC:"76:0e:87:32:a2:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:52.015316 containerd[1462]: 2025-08-13 07:18:51.997 [INFO][4219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7" Namespace="calico-system" Pod="calico-kube-controllers-6fbc6d7cb9-xtnlk" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:18:52.067666 containerd[1462]: time="2025-08-13T07:18:52.067476959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:52.068290 containerd[1462]: time="2025-08-13T07:18:52.068051032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:52.068290 containerd[1462]: time="2025-08-13T07:18:52.068143552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:52.075676 containerd[1462]: time="2025-08-13T07:18:52.075242275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:52.106997 systemd-networkd[1377]: calic316141921e: Link UP Aug 13 07:18:52.111083 systemd-networkd[1377]: calic316141921e: Gained carrier Aug 13 07:18:52.160598 systemd[1]: Started cri-containerd-4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7.scope - libcontainer container 4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7. Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.644 [INFO][4229] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.700 [INFO][4229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0 coredns-668d6bf9bc- kube-system 84e92942-4591-4943-868f-92a2efe7e6af 907 0 2025-08-13 07:18:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal coredns-668d6bf9bc-s4zz2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic316141921e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4zz2" 
WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.700 [INFO][4229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4zz2" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.829 [INFO][4245] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" HandleID="k8s-pod-network.dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.830 [INFO][4245] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" HandleID="k8s-pod-network.dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031fd50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-s4zz2", "timestamp":"2025-08-13 07:18:51.829792803 +0000 UTC"}, Hostname:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.830 [INFO][4245] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.921 [INFO][4245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.921 [INFO][4245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.957 [INFO][4245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.972 [INFO][4245] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:51.997 [INFO][4245] ipam/ipam.go 511: Trying affinity for 192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:52.010 [INFO][4245] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:52.018 [INFO][4245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:52.018 [INFO][4245] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:52.025 [INFO][4245] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0 Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:52.045 [INFO][4245] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.128/26 handle="k8s-pod-network.dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:52.070 [INFO][4245] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.131/26] block=192.168.35.128/26 handle="k8s-pod-network.dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:52.071 [INFO][4245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.131/26] handle="k8s-pod-network.dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:52.071 [INFO][4245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:18:52.167621 containerd[1462]: 2025-08-13 07:18:52.071 [INFO][4245] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.131/26] IPv6=[] ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" HandleID="k8s-pod-network.dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:52.171247 containerd[1462]: 2025-08-13 07:18:52.084 [INFO][4229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4zz2" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"84e92942-4591-4943-868f-92a2efe7e6af", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-s4zz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic316141921e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:52.171247 containerd[1462]: 2025-08-13 07:18:52.094 [INFO][4229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.131/32] ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4zz2" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:52.171247 containerd[1462]: 2025-08-13 07:18:52.094 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic316141921e ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4zz2" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:52.171247 containerd[1462]: 2025-08-13 07:18:52.115 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4zz2" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:52.171247 containerd[1462]: 2025-08-13 07:18:52.115 [INFO][4229] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4zz2" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"84e92942-4591-4943-868f-92a2efe7e6af", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0", Pod:"coredns-668d6bf9bc-s4zz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic316141921e", MAC:"2a:89:0b:3e:d0:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:52.171247 containerd[1462]: 2025-08-13 07:18:52.152 [INFO][4229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4zz2" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:18:52.215735 containerd[1462]: time="2025-08-13T07:18:52.213437690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:52.218911 containerd[1462]: time="2025-08-13T07:18:52.217578458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:52.218911 containerd[1462]: time="2025-08-13T07:18:52.218679880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:52.222116 containerd[1462]: time="2025-08-13T07:18:52.221331776Z" level=info msg="StopPodSandbox for \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\"" Aug 13 07:18:52.222814 containerd[1462]: time="2025-08-13T07:18:52.221462598Z" level=info msg="StopPodSandbox for \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\"" Aug 13 07:18:52.229644 containerd[1462]: time="2025-08-13T07:18:52.224119491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:52.310779 systemd[1]: Started cri-containerd-dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0.scope - libcontainer container dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0. Aug 13 07:18:52.431443 containerd[1462]: time="2025-08-13T07:18:52.431261842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4zz2,Uid:84e92942-4591-4943-868f-92a2efe7e6af,Namespace:kube-system,Attempt:1,} returns sandbox id \"dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0\"" Aug 13 07:18:52.445733 containerd[1462]: time="2025-08-13T07:18:52.445685195Z" level=info msg="CreateContainer within sandbox \"dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:18:52.487416 containerd[1462]: time="2025-08-13T07:18:52.487269626Z" level=info msg="CreateContainer within sandbox \"dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d04b1fa3e09882ad78488fc56c03dfc3314d6a8d68467a1593057ddddaa1dc83\"" Aug 13 07:18:52.491370 containerd[1462]: time="2025-08-13T07:18:52.491067208Z" level=info msg="StartContainer for \"d04b1fa3e09882ad78488fc56c03dfc3314d6a8d68467a1593057ddddaa1dc83\"" Aug 13 07:18:52.495094 containerd[1462]: time="2025-08-13T07:18:52.493825039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbc6d7cb9-xtnlk,Uid:454544c9-e57d-4404-ae95-88b611efc21a,Namespace:calico-system,Attempt:1,} returns sandbox id \"4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7\"" Aug 13 07:18:52.498073 containerd[1462]: time="2025-08-13T07:18:52.498015591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.420 [INFO][4347] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.421 [INFO][4347] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" iface="eth0" netns="/var/run/netns/cni-daff393b-633f-de56-3e80-d40539e3f11c" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.422 [INFO][4347] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" iface="eth0" netns="/var/run/netns/cni-daff393b-633f-de56-3e80-d40539e3f11c" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.423 [INFO][4347] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" iface="eth0" netns="/var/run/netns/cni-daff393b-633f-de56-3e80-d40539e3f11c" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.423 [INFO][4347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.423 [INFO][4347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.512 [INFO][4381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" HandleID="k8s-pod-network.cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.513 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.513 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.526 [WARNING][4381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" HandleID="k8s-pod-network.cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.527 [INFO][4381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" HandleID="k8s-pod-network.cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.532 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:52.546464 containerd[1462]: 2025-08-13 07:18:52.537 [INFO][4347] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:18:52.547668 containerd[1462]: time="2025-08-13T07:18:52.546789240Z" level=info msg="TearDown network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\" successfully" Aug 13 07:18:52.547668 containerd[1462]: time="2025-08-13T07:18:52.546838578Z" level=info msg="StopPodSandbox for \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\" returns successfully" Aug 13 07:18:52.549436 containerd[1462]: time="2025-08-13T07:18:52.548899455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fb794d684-84gfw,Uid:8937aae4-009b-4f60-9764-2d5d28342995,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:18:52.574818 systemd[1]: Started cri-containerd-d04b1fa3e09882ad78488fc56c03dfc3314d6a8d68467a1593057ddddaa1dc83.scope - libcontainer container d04b1fa3e09882ad78488fc56c03dfc3314d6a8d68467a1593057ddddaa1dc83. Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.455 [INFO][4351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.457 [INFO][4351] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" iface="eth0" netns="/var/run/netns/cni-ef4b0a82-5bff-6a0f-801a-0bf2764d5e59" Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.458 [INFO][4351] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" iface="eth0" netns="/var/run/netns/cni-ef4b0a82-5bff-6a0f-801a-0bf2764d5e59" Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.458 [INFO][4351] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" iface="eth0" netns="/var/run/netns/cni-ef4b0a82-5bff-6a0f-801a-0bf2764d5e59" Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.458 [INFO][4351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.458 [INFO][4351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.566 [INFO][4388] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" HandleID="k8s-pod-network.711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.566 [INFO][4388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.566 [INFO][4388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.584 [WARNING][4388] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" HandleID="k8s-pod-network.711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.585 [INFO][4388] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" HandleID="k8s-pod-network.711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.588 [INFO][4388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:52.607527 containerd[1462]: 2025-08-13 07:18:52.600 [INFO][4351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:18:52.610048 containerd[1462]: time="2025-08-13T07:18:52.608644195Z" level=info msg="TearDown network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\" successfully" Aug 13 07:18:52.610048 containerd[1462]: time="2025-08-13T07:18:52.608707524Z" level=info msg="StopPodSandbox for \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\" returns successfully" Aug 13 07:18:52.610754 containerd[1462]: time="2025-08-13T07:18:52.610699502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bbxp6,Uid:d0cf14fd-3d11-43c2-a719-49dbd30906de,Namespace:calico-system,Attempt:1,}" Aug 13 07:18:52.652943 containerd[1462]: time="2025-08-13T07:18:52.652891171Z" level=info msg="StartContainer for \"d04b1fa3e09882ad78488fc56c03dfc3314d6a8d68467a1593057ddddaa1dc83\" returns successfully" Aug 13 07:18:52.851882 systemd-networkd[1377]: cali12d8b4e0ed2: Link UP Aug 13 
07:18:52.856889 systemd-networkd[1377]: cali12d8b4e0ed2: Gained carrier Aug 13 07:18:52.872562 systemd[1]: run-containerd-runc-k8s.io-dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0-runc.Hz4hIw.mount: Deactivated successfully. Aug 13 07:18:52.872734 systemd[1]: run-netns-cni\x2ddaff393b\x2d633f\x2dde56\x2d3e80\x2dd40539e3f11c.mount: Deactivated successfully. Aug 13 07:18:52.872858 systemd[1]: run-netns-cni\x2def4b0a82\x2d5bff\x2d6a0f\x2d801a\x2d0bf2764d5e59.mount: Deactivated successfully. Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.670 [INFO][4427] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.692 [INFO][4427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0 calico-apiserver-5fb794d684- calico-apiserver 8937aae4-009b-4f60-9764-2d5d28342995 922 0 2025-08-13 07:18:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fb794d684 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal calico-apiserver-5fb794d684-84gfw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali12d8b4e0ed2 [] [] }} ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-84gfw" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.692 [INFO][4427] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-84gfw" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.766 [INFO][4467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" HandleID="k8s-pod-network.7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.767 [INFO][4467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" HandleID="k8s-pod-network.7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", "pod":"calico-apiserver-5fb794d684-84gfw", "timestamp":"2025-08-13 07:18:52.766761306 +0000 UTC"}, Hostname:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.767 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.767 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.768 [INFO][4467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.780 [INFO][4467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.788 [INFO][4467] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.796 [INFO][4467] ipam/ipam.go 511: Trying affinity for 192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.800 [INFO][4467] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.804 [INFO][4467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.804 [INFO][4467] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.807 [INFO][4467] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10 Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.816 [INFO][4467] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.35.128/26 handle="k8s-pod-network.7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.830 [INFO][4467] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.132/26] block=192.168.35.128/26 handle="k8s-pod-network.7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.832 [INFO][4467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.132/26] handle="k8s-pod-network.7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.833 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:52.893421 containerd[1462]: 2025-08-13 07:18:52.834 [INFO][4467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.132/26] IPv6=[] ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" HandleID="k8s-pod-network.7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.894906 containerd[1462]: 2025-08-13 07:18:52.842 [INFO][4427] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-84gfw" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0", GenerateName:"calico-apiserver-5fb794d684-", Namespace:"calico-apiserver", SelfLink:"", UID:"8937aae4-009b-4f60-9764-2d5d28342995", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fb794d684", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-5fb794d684-84gfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d8b4e0ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:52.894906 containerd[1462]: 2025-08-13 07:18:52.843 [INFO][4427] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.132/32] ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-84gfw" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.894906 containerd[1462]: 2025-08-13 07:18:52.843 [INFO][4427] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12d8b4e0ed2 ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-84gfw" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.894906 containerd[1462]: 2025-08-13 07:18:52.856 [INFO][4427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-84gfw" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.894906 containerd[1462]: 2025-08-13 07:18:52.861 [INFO][4427] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-84gfw" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0", GenerateName:"calico-apiserver-5fb794d684-", Namespace:"calico-apiserver", SelfLink:"", UID:"8937aae4-009b-4f60-9764-2d5d28342995", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fb794d684", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10", Pod:"calico-apiserver-5fb794d684-84gfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d8b4e0ed2", MAC:"12:f1:e3:78:d2:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:52.894906 containerd[1462]: 2025-08-13 07:18:52.891 [INFO][4427] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-84gfw" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:18:52.948798 containerd[1462]: time="2025-08-13T07:18:52.941133080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:52.948798 containerd[1462]: time="2025-08-13T07:18:52.941213924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:52.948798 containerd[1462]: time="2025-08-13T07:18:52.941233799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:52.948798 containerd[1462]: time="2025-08-13T07:18:52.941363657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:52.946857 systemd-networkd[1377]: cali5938e54b343: Link UP Aug 13 07:18:52.950330 systemd-networkd[1377]: cali5938e54b343: Gained carrier Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.730 [INFO][4443] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.772 [INFO][4443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0 goldmane-768f4c5c69- calico-system d0cf14fd-3d11-43c2-a719-49dbd30906de 923 0 2025-08-13 07:18:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal goldmane-768f4c5c69-bbxp6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5938e54b343 [] [] }} ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Namespace="calico-system" Pod="goldmane-768f4c5c69-bbxp6" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.772 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Namespace="calico-system" Pod="goldmane-768f4c5c69-bbxp6" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 
13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.824 [INFO][4477] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" HandleID="k8s-pod-network.3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.824 [INFO][4477] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" HandleID="k8s-pod-network.3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", "pod":"goldmane-768f4c5c69-bbxp6", "timestamp":"2025-08-13 07:18:52.824315108 +0000 UTC"}, Hostname:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.824 [INFO][4477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.834 [INFO][4477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.835 [INFO][4477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.881 [INFO][4477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.891 [INFO][4477] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.898 [INFO][4477] ipam/ipam.go 511: Trying affinity for 192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.901 [INFO][4477] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.907 [INFO][4477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.908 [INFO][4477] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.912 [INFO][4477] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297 Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.923 [INFO][4477] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.35.128/26 handle="k8s-pod-network.3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.936 [INFO][4477] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.133/26] block=192.168.35.128/26 handle="k8s-pod-network.3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.937 [INFO][4477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.133/26] handle="k8s-pod-network.3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.937 [INFO][4477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:52.992942 containerd[1462]: 2025-08-13 07:18:52.937 [INFO][4477] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.133/26] IPv6=[] ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" HandleID="k8s-pod-network.3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:18:52.996361 containerd[1462]: 2025-08-13 07:18:52.940 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Namespace="calico-system" Pod="goldmane-768f4c5c69-bbxp6" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d0cf14fd-3d11-43c2-a719-49dbd30906de", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-768f4c5c69-bbxp6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5938e54b343", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:52.996361 containerd[1462]: 2025-08-13 07:18:52.940 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.133/32] ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Namespace="calico-system" Pod="goldmane-768f4c5c69-bbxp6" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:18:52.996361 containerd[1462]: 2025-08-13 07:18:52.940 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5938e54b343 
ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Namespace="calico-system" Pod="goldmane-768f4c5c69-bbxp6" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:18:52.996361 containerd[1462]: 2025-08-13 07:18:52.956 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Namespace="calico-system" Pod="goldmane-768f4c5c69-bbxp6" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:18:52.996361 containerd[1462]: 2025-08-13 07:18:52.961 [INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Namespace="calico-system" Pod="goldmane-768f4c5c69-bbxp6" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d0cf14fd-3d11-43c2-a719-49dbd30906de", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297", Pod:"goldmane-768f4c5c69-bbxp6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5938e54b343", MAC:"e2:74:9d:2b:79:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:52.996361 containerd[1462]: 2025-08-13 07:18:52.989 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297" Namespace="calico-system" Pod="goldmane-768f4c5c69-bbxp6" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:18:53.007981 systemd[1]: Started cri-containerd-7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10.scope - libcontainer container 7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10. Aug 13 07:18:53.048587 containerd[1462]: time="2025-08-13T07:18:53.042007324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:53.048587 containerd[1462]: time="2025-08-13T07:18:53.042086708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:53.048587 containerd[1462]: time="2025-08-13T07:18:53.042116784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:53.048587 containerd[1462]: time="2025-08-13T07:18:53.042264031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:53.110240 systemd[1]: Started cri-containerd-3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297.scope - libcontainer container 3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297. Aug 13 07:18:53.182859 containerd[1462]: time="2025-08-13T07:18:53.182240809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fb794d684-84gfw,Uid:8937aae4-009b-4f60-9764-2d5d28342995,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10\"" Aug 13 07:18:53.207875 systemd-networkd[1377]: calic316141921e: Gained IPv6LL Aug 13 07:18:53.287889 containerd[1462]: time="2025-08-13T07:18:53.287773984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bbxp6,Uid:d0cf14fd-3d11-43c2-a719-49dbd30906de,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297\"" Aug 13 07:18:53.358708 kubelet[2602]: I0813 07:18:53.357249 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:53.629943 kubelet[2602]: I0813 07:18:53.629701 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s4zz2" podStartSLOduration=41.62967245 podStartE2EDuration="41.62967245s" podCreationTimestamp="2025-08-13 07:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:53.627185025 +0000 UTC m=+47.656867352" watchObservedRunningTime="2025-08-13 07:18:53.62967245 +0000 UTC m=+47.659354778" Aug 13 07:18:53.911948 systemd-networkd[1377]: calibf5f020c0d2: 
Gained IPv6LL Aug 13 07:18:54.106206 systemd-networkd[1377]: cali5938e54b343: Gained IPv6LL Aug 13 07:18:54.360898 systemd-networkd[1377]: cali12d8b4e0ed2: Gained IPv6LL Aug 13 07:18:55.214558 containerd[1462]: time="2025-08-13T07:18:55.214011114Z" level=info msg="StopPodSandbox for \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\"" Aug 13 07:18:55.217093 containerd[1462]: time="2025-08-13T07:18:55.216628122Z" level=info msg="StopPodSandbox for \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\"" Aug 13 07:18:55.220018 containerd[1462]: time="2025-08-13T07:18:55.219576483Z" level=info msg="StopPodSandbox for \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\"" Aug 13 07:18:55.391572 kernel: bpftool[4694]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.461 [INFO][4656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.463 [INFO][4656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" iface="eth0" netns="/var/run/netns/cni-f354df2f-a349-19f4-903a-8e01de311196" Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.466 [INFO][4656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" iface="eth0" netns="/var/run/netns/cni-f354df2f-a349-19f4-903a-8e01de311196" Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.470 [INFO][4656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" iface="eth0" netns="/var/run/netns/cni-f354df2f-a349-19f4-903a-8e01de311196" Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.470 [INFO][4656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.471 [INFO][4656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.602 [INFO][4700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" HandleID="k8s-pod-network.e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.604 [INFO][4700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.604 [INFO][4700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.641 [WARNING][4700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" HandleID="k8s-pod-network.e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.642 [INFO][4700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" HandleID="k8s-pod-network.e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.646 [INFO][4700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:55.672762 containerd[1462]: 2025-08-13 07:18:55.658 [INFO][4656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:18:55.676597 containerd[1462]: time="2025-08-13T07:18:55.676535493Z" level=info msg="TearDown network for sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\" successfully" Aug 13 07:18:55.676597 containerd[1462]: time="2025-08-13T07:18:55.676583925Z" level=info msg="StopPodSandbox for \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\" returns successfully" Aug 13 07:18:55.679533 containerd[1462]: time="2025-08-13T07:18:55.679178644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmnnx,Uid:9995649e-a9c2-4dd0-ab3a-469f68507e9a,Namespace:calico-system,Attempt:1,}" Aug 13 07:18:55.688308 systemd[1]: run-netns-cni\x2df354df2f\x2da349\x2d19f4\x2d903a\x2d8e01de311196.mount: Deactivated successfully. 
Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.471 [INFO][4653] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.473 [INFO][4653] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" iface="eth0" netns="/var/run/netns/cni-99c349de-f5f4-9d12-d45f-612b5bb4d71d" Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.473 [INFO][4653] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" iface="eth0" netns="/var/run/netns/cni-99c349de-f5f4-9d12-d45f-612b5bb4d71d" Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.475 [INFO][4653] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" iface="eth0" netns="/var/run/netns/cni-99c349de-f5f4-9d12-d45f-612b5bb4d71d" Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.475 [INFO][4653] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.475 [INFO][4653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.745 [INFO][4702] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" HandleID="k8s-pod-network.5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" Aug 13 07:18:55.807287 containerd[1462]: 
2025-08-13 07:18:55.745 [INFO][4702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.745 [INFO][4702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.780 [WARNING][4702] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" HandleID="k8s-pod-network.5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.780 [INFO][4702] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" HandleID="k8s-pod-network.5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.784 [INFO][4702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:55.807287 containerd[1462]: 2025-08-13 07:18:55.795 [INFO][4653] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:18:55.813351 containerd[1462]: time="2025-08-13T07:18:55.813191777Z" level=info msg="TearDown network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\" successfully" Aug 13 07:18:55.816536 containerd[1462]: time="2025-08-13T07:18:55.815642380Z" level=info msg="StopPodSandbox for \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\" returns successfully" Aug 13 07:18:55.821986 systemd[1]: run-netns-cni\x2d99c349de\x2df5f4\x2d9d12\x2dd45f\x2d612b5bb4d71d.mount: Deactivated successfully. 
Aug 13 07:18:55.823277 containerd[1462]: time="2025-08-13T07:18:55.820247469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z22sd,Uid:d2864426-4b9a-4a74-b95d-9856eb5042a1,Namespace:kube-system,Attempt:1,}"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.553 [INFO][4657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.554 [INFO][4657] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" iface="eth0" netns="/var/run/netns/cni-69569043-4190-7d55-4637-008ebff5c4c0"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.554 [INFO][4657] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" iface="eth0" netns="/var/run/netns/cni-69569043-4190-7d55-4637-008ebff5c4c0"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.557 [INFO][4657] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" iface="eth0" netns="/var/run/netns/cni-69569043-4190-7d55-4637-008ebff5c4c0"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.557 [INFO][4657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.557 [INFO][4657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.788 [INFO][4712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" HandleID="k8s-pod-network.8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.789 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.789 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.811 [WARNING][4712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" HandleID="k8s-pod-network.8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.811 [INFO][4712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" HandleID="k8s-pod-network.8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0"
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.823 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:18:55.842531 containerd[1462]: 2025-08-13 07:18:55.831 [INFO][4657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0"
Aug 13 07:18:55.846898 containerd[1462]: time="2025-08-13T07:18:55.846674420Z" level=info msg="TearDown network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\" successfully"
Aug 13 07:18:55.846898 containerd[1462]: time="2025-08-13T07:18:55.846721866Z" level=info msg="StopPodSandbox for \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\" returns successfully"
Aug 13 07:18:55.850584 containerd[1462]: time="2025-08-13T07:18:55.849862356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fb794d684-rjd8h,Uid:b8b0bd01-1905-4ea2-9587-6ddd1435f3f6,Namespace:calico-apiserver,Attempt:1,}"
Aug 13 07:18:55.855385 systemd[1]: run-netns-cni\x2d69569043\x2d4190\x2d7d55\x2d4637\x2d008ebff5c4c0.mount: Deactivated successfully.
Aug 13 07:18:56.401497 systemd-networkd[1377]: calia0dbe95dea2: Link UP
Aug 13 07:18:56.407871 systemd-networkd[1377]: calia0dbe95dea2: Gained carrier
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:55.970 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0 csi-node-driver- calico-system 9995649e-a9c2-4dd0-ab3a-469f68507e9a 963 0 2025-08-13 07:18:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal csi-node-driver-vmnnx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia0dbe95dea2 [] [] }} ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Namespace="calico-system" Pod="csi-node-driver-vmnnx" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:55.972 [INFO][4720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Namespace="calico-system" Pod="csi-node-driver-vmnnx" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.226 [INFO][4755] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" HandleID="k8s-pod-network.1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.229 [INFO][4755] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" HandleID="k8s-pod-network.1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ed310), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", "pod":"csi-node-driver-vmnnx", "timestamp":"2025-08-13 07:18:56.226787865 +0000 UTC"}, Hostname:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.230 [INFO][4755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.230 [INFO][4755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.230 [INFO][4755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal'
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.263 [INFO][4755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.280 [INFO][4755] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.311 [INFO][4755] ipam/ipam.go 511: Trying affinity for 192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.317 [INFO][4755] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.331 [INFO][4755] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.334 [INFO][4755] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.339 [INFO][4755] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.355 [INFO][4755] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.128/26 handle="k8s-pod-network.1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.376 [INFO][4755] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.134/26] block=192.168.35.128/26 handle="k8s-pod-network.1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.376 [INFO][4755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.134/26] handle="k8s-pod-network.1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.376 [INFO][4755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:18:56.463140 containerd[1462]: 2025-08-13 07:18:56.376 [INFO][4755] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.134/26] IPv6=[] ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" HandleID="k8s-pod-network.1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0"
Aug 13 07:18:56.466466 containerd[1462]: 2025-08-13 07:18:56.393 [INFO][4720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Namespace="calico-system" Pod="csi-node-driver-vmnnx" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9995649e-a9c2-4dd0-ab3a-469f68507e9a", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-vmnnx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0dbe95dea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:56.466466 containerd[1462]: 2025-08-13 07:18:56.393 [INFO][4720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.134/32] ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Namespace="calico-system" Pod="csi-node-driver-vmnnx" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0"
Aug 13 07:18:56.466466 containerd[1462]: 2025-08-13 07:18:56.393 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0dbe95dea2 ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Namespace="calico-system" Pod="csi-node-driver-vmnnx" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0"
Aug 13 07:18:56.466466 containerd[1462]: 2025-08-13 07:18:56.410 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Namespace="calico-system" Pod="csi-node-driver-vmnnx" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0"
Aug 13 07:18:56.466466 containerd[1462]: 2025-08-13 07:18:56.413 [INFO][4720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Namespace="calico-system" Pod="csi-node-driver-vmnnx" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9995649e-a9c2-4dd0-ab3a-469f68507e9a", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6", Pod:"csi-node-driver-vmnnx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0dbe95dea2", MAC:"72:a6:7a:29:82:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:56.466466 containerd[1462]: 2025-08-13 07:18:56.455 [INFO][4720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6" Namespace="calico-system" Pod="csi-node-driver-vmnnx" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0"
Aug 13 07:18:56.512654 systemd-networkd[1377]: cali3a0aa9c5c46: Link UP
Aug 13 07:18:56.513001 systemd-networkd[1377]: cali3a0aa9c5c46: Gained carrier
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.082 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0 coredns-668d6bf9bc- kube-system d2864426-4b9a-4a74-b95d-9856eb5042a1 964 0 2025-08-13 07:18:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal coredns-668d6bf9bc-z22sd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a0aa9c5c46 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Namespace="kube-system" Pod="coredns-668d6bf9bc-z22sd" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.082 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Namespace="kube-system" Pod="coredns-668d6bf9bc-z22sd" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.371 [INFO][4765] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" HandleID="k8s-pod-network.d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.372 [INFO][4765] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" HandleID="k8s-pod-network.d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002acbf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-z22sd", "timestamp":"2025-08-13 07:18:56.370916236 +0000 UTC"}, Hostname:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.372 [INFO][4765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.376 [INFO][4765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.377 [INFO][4765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal'
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.392 [INFO][4765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.421 [INFO][4765] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.435 [INFO][4765] ipam/ipam.go 511: Trying affinity for 192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.438 [INFO][4765] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.442 [INFO][4765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.442 [INFO][4765] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.453 [INFO][4765] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.467 [INFO][4765] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.128/26 handle="k8s-pod-network.d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.489 [INFO][4765] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.135/26] block=192.168.35.128/26 handle="k8s-pod-network.d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.490 [INFO][4765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.135/26] handle="k8s-pod-network.d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal"
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.490 [INFO][4765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:18:56.574958 containerd[1462]: 2025-08-13 07:18:56.490 [INFO][4765] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.135/26] IPv6=[] ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" HandleID="k8s-pod-network.d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0"
Aug 13 07:18:56.577619 containerd[1462]: 2025-08-13 07:18:56.502 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Namespace="kube-system" Pod="coredns-668d6bf9bc-z22sd" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d2864426-4b9a-4a74-b95d-9856eb5042a1", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-z22sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a0aa9c5c46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:56.577619 containerd[1462]: 2025-08-13 07:18:56.504 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.135/32] ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Namespace="kube-system" Pod="coredns-668d6bf9bc-z22sd" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0"
Aug 13 07:18:56.577619 containerd[1462]: 2025-08-13 07:18:56.504 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a0aa9c5c46 ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Namespace="kube-system" Pod="coredns-668d6bf9bc-z22sd" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0"
Aug 13 07:18:56.577619 containerd[1462]: 2025-08-13 07:18:56.518 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Namespace="kube-system" Pod="coredns-668d6bf9bc-z22sd" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0"
Aug 13 07:18:56.577619 containerd[1462]: 2025-08-13 07:18:56.525 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Namespace="kube-system" Pod="coredns-668d6bf9bc-z22sd" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d2864426-4b9a-4a74-b95d-9856eb5042a1", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003", Pod:"coredns-668d6bf9bc-z22sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a0aa9c5c46", MAC:"36:58:77:f0:5a:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:56.577619 containerd[1462]: 2025-08-13 07:18:56.561 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003" Namespace="kube-system" Pod="coredns-668d6bf9bc-z22sd" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0"
Aug 13 07:18:56.626614 containerd[1462]: time="2025-08-13T07:18:56.623199342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:18:56.626614 containerd[1462]: time="2025-08-13T07:18:56.623308719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:18:56.626614 containerd[1462]: time="2025-08-13T07:18:56.623330091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:56.626614 containerd[1462]: time="2025-08-13T07:18:56.623482990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:56.719117 systemd-networkd[1377]: cali5c7c4884c1c: Link UP
Aug 13 07:18:56.722651 systemd-networkd[1377]: cali5c7c4884c1c: Gained carrier
Aug 13 07:18:56.734369 containerd[1462]: time="2025-08-13T07:18:56.734196974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:18:56.734784 containerd[1462]: time="2025-08-13T07:18:56.734599911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:18:56.734784 containerd[1462]: time="2025-08-13T07:18:56.734686190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:56.735379 containerd[1462]: time="2025-08-13T07:18:56.735283922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.141 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0 calico-apiserver-5fb794d684- calico-apiserver b8b0bd01-1905-4ea2-9587-6ddd1435f3f6 966 0 2025-08-13 07:18:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fb794d684 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal calico-apiserver-5fb794d684-rjd8h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5c7c4884c1c [] [] }} ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-rjd8h" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-"
Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.149 [INFO][4745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-rjd8h" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0"
Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.463 [INFO][4770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" HandleID="k8s-pod-network.150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0"
Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.464 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" HandleID="k8s-pod-network.150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042d040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", "pod":"calico-apiserver-5fb794d684-rjd8h", "timestamp":"2025-08-13 07:18:56.463844304 +0000 UTC"}, Hostname:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.464 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.490 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.491 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal' Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.525 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.544 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.585 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.594 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.602 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.602 [INFO][4770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.613 [INFO][4770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04 Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.630 [INFO][4770] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.35.128/26 handle="k8s-pod-network.150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.660 [INFO][4770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.136/26] block=192.168.35.128/26 handle="k8s-pod-network.150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.660 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.136/26] handle="k8s-pod-network.150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" host="ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal" Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.660 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:56.792340 containerd[1462]: 2025-08-13 07:18:56.660 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.136/26] IPv6=[] ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" HandleID="k8s-pod-network.150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:18:56.793853 containerd[1462]: 2025-08-13 07:18:56.673 [INFO][4745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-rjd8h" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0", GenerateName:"calico-apiserver-5fb794d684-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8b0bd01-1905-4ea2-9587-6ddd1435f3f6", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fb794d684", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-5fb794d684-rjd8h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c7c4884c1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:56.793853 containerd[1462]: 2025-08-13 07:18:56.674 [INFO][4745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.136/32] ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-rjd8h" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:18:56.793853 containerd[1462]: 2025-08-13 07:18:56.674 [INFO][4745] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c7c4884c1c ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-rjd8h" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:18:56.793853 containerd[1462]: 2025-08-13 07:18:56.718 [INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-rjd8h" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:18:56.793853 containerd[1462]: 2025-08-13 07:18:56.740 [INFO][4745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-rjd8h" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0", GenerateName:"calico-apiserver-5fb794d684-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8b0bd01-1905-4ea2-9587-6ddd1435f3f6", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fb794d684", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04", Pod:"calico-apiserver-5fb794d684-rjd8h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c7c4884c1c", MAC:"fe:60:ea:40:a7:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:56.793853 containerd[1462]: 2025-08-13 07:18:56.780 [INFO][4745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04" Namespace="calico-apiserver" Pod="calico-apiserver-5fb794d684-rjd8h" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:18:56.834651 systemd[1]: run-containerd-runc-k8s.io-1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6-runc.WuTxNT.mount: Deactivated successfully. Aug 13 07:18:56.847010 systemd[1]: Started cri-containerd-1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6.scope - libcontainer container 1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6. Aug 13 07:18:56.915752 systemd[1]: Started cri-containerd-d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003.scope - libcontainer container d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003. 
Aug 13 07:18:56.926692 systemd-networkd[1377]: vxlan.calico: Link UP Aug 13 07:18:56.926705 systemd-networkd[1377]: vxlan.calico: Gained carrier Aug 13 07:18:56.983999 containerd[1462]: time="2025-08-13T07:18:56.980326026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:56.983999 containerd[1462]: time="2025-08-13T07:18:56.980430718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:56.983999 containerd[1462]: time="2025-08-13T07:18:56.980461928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:56.983999 containerd[1462]: time="2025-08-13T07:18:56.980625341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:57.102420 systemd[1]: Started cri-containerd-150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04.scope - libcontainer container 150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04. 
Aug 13 07:18:57.116120 containerd[1462]: time="2025-08-13T07:18:57.116061861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmnnx,Uid:9995649e-a9c2-4dd0-ab3a-469f68507e9a,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6\"" Aug 13 07:18:57.225963 containerd[1462]: time="2025-08-13T07:18:57.225900971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z22sd,Uid:d2864426-4b9a-4a74-b95d-9856eb5042a1,Namespace:kube-system,Attempt:1,} returns sandbox id \"d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003\"" Aug 13 07:18:57.242329 containerd[1462]: time="2025-08-13T07:18:57.242162059Z" level=info msg="CreateContainer within sandbox \"d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:18:57.279315 containerd[1462]: time="2025-08-13T07:18:57.279198416Z" level=info msg="CreateContainer within sandbox \"d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da7fb899b9e1a495470b77785199122c59c8b2f2ec0c404f8bbe3d625f442b9a\"" Aug 13 07:18:57.281308 containerd[1462]: time="2025-08-13T07:18:57.281240830Z" level=info msg="StartContainer for \"da7fb899b9e1a495470b77785199122c59c8b2f2ec0c404f8bbe3d625f442b9a\"" Aug 13 07:18:57.418362 systemd[1]: Started cri-containerd-da7fb899b9e1a495470b77785199122c59c8b2f2ec0c404f8bbe3d625f442b9a.scope - libcontainer container da7fb899b9e1a495470b77785199122c59c8b2f2ec0c404f8bbe3d625f442b9a. 
Aug 13 07:18:57.491661 containerd[1462]: time="2025-08-13T07:18:57.491569534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fb794d684-rjd8h,Uid:b8b0bd01-1905-4ea2-9587-6ddd1435f3f6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04\"" Aug 13 07:18:57.548177 containerd[1462]: time="2025-08-13T07:18:57.548120744Z" level=info msg="StartContainer for \"da7fb899b9e1a495470b77785199122c59c8b2f2ec0c404f8bbe3d625f442b9a\" returns successfully" Aug 13 07:18:57.624136 systemd-networkd[1377]: calia0dbe95dea2: Gained IPv6LL Aug 13 07:18:57.844655 containerd[1462]: time="2025-08-13T07:18:57.841046743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:57.847336 containerd[1462]: time="2025-08-13T07:18:57.847176858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:18:57.849962 containerd[1462]: time="2025-08-13T07:18:57.849905908Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:57.855921 containerd[1462]: time="2025-08-13T07:18:57.855856073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:57.858081 containerd[1462]: time="2025-08-13T07:18:57.857851824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.359405169s" Aug 13 07:18:57.858628 containerd[1462]: time="2025-08-13T07:18:57.858048970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:18:57.862201 containerd[1462]: time="2025-08-13T07:18:57.861854589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:18:57.880386 systemd-networkd[1377]: cali5c7c4884c1c: Gained IPv6LL Aug 13 07:18:57.890131 containerd[1462]: time="2025-08-13T07:18:57.889899204Z" level=info msg="CreateContainer within sandbox \"4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:18:57.933645 containerd[1462]: time="2025-08-13T07:18:57.933571132Z" level=info msg="CreateContainer within sandbox \"4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"71591586af03a4dc98be14995011c3a748662ba5a9963e74db5f51578f3c0414\"" Aug 13 07:18:57.935811 containerd[1462]: time="2025-08-13T07:18:57.935747516Z" level=info msg="StartContainer for \"71591586af03a4dc98be14995011c3a748662ba5a9963e74db5f51578f3c0414\"" Aug 13 07:18:58.000797 systemd[1]: Started cri-containerd-71591586af03a4dc98be14995011c3a748662ba5a9963e74db5f51578f3c0414.scope - libcontainer container 71591586af03a4dc98be14995011c3a748662ba5a9963e74db5f51578f3c0414. 
Aug 13 07:18:58.199573 containerd[1462]: time="2025-08-13T07:18:58.197214505Z" level=info msg="StartContainer for \"71591586af03a4dc98be14995011c3a748662ba5a9963e74db5f51578f3c0414\" returns successfully" Aug 13 07:18:58.264660 systemd-networkd[1377]: cali3a0aa9c5c46: Gained IPv6LL Aug 13 07:18:58.688570 kubelet[2602]: I0813 07:18:58.687916 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z22sd" podStartSLOduration=46.687881995 podStartE2EDuration="46.687881995s" podCreationTimestamp="2025-08-13 07:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:57.719355421 +0000 UTC m=+51.749037749" watchObservedRunningTime="2025-08-13 07:18:58.687881995 +0000 UTC m=+52.717564320" Aug 13 07:18:58.715239 kubelet[2602]: I0813 07:18:58.715047 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fbc6d7cb9-xtnlk" podStartSLOduration=25.350040331 podStartE2EDuration="30.715017943s" podCreationTimestamp="2025-08-13 07:18:28 +0000 UTC" firstStartedPulling="2025-08-13 07:18:52.496653588 +0000 UTC m=+46.526335906" lastFinishedPulling="2025-08-13 07:18:57.861631198 +0000 UTC m=+51.891313518" observedRunningTime="2025-08-13 07:18:58.692335169 +0000 UTC m=+52.722017497" watchObservedRunningTime="2025-08-13 07:18:58.715017943 +0000 UTC m=+52.744700270" Aug 13 07:18:58.841000 systemd-networkd[1377]: vxlan.calico: Gained IPv6LL Aug 13 07:19:00.859194 ntpd[1430]: Listen normally on 8 vxlan.calico 192.168.35.128:123 Aug 13 07:19:00.860569 ntpd[1430]: 13 Aug 07:19:00 ntpd[1430]: Listen normally on 8 vxlan.calico 192.168.35.128:123 Aug 13 07:19:00.860569 ntpd[1430]: 13 Aug 07:19:00 ntpd[1430]: Listen normally on 9 calibf5f020c0d2 [fe80::ecee:eeff:feee:eeee%5]:123 Aug 13 07:19:00.860569 ntpd[1430]: 13 Aug 07:19:00 ntpd[1430]: Listen normally on 10 calic316141921e 
[fe80::ecee:eeff:feee:eeee%6]:123 Aug 13 07:19:00.860569 ntpd[1430]: 13 Aug 07:19:00 ntpd[1430]: Listen normally on 11 cali12d8b4e0ed2 [fe80::ecee:eeff:feee:eeee%7]:123 Aug 13 07:19:00.860569 ntpd[1430]: 13 Aug 07:19:00 ntpd[1430]: Listen normally on 12 cali5938e54b343 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 07:19:00.860569 ntpd[1430]: 13 Aug 07:19:00 ntpd[1430]: Listen normally on 13 calia0dbe95dea2 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 07:19:00.860569 ntpd[1430]: 13 Aug 07:19:00 ntpd[1430]: Listen normally on 14 cali3a0aa9c5c46 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 07:19:00.860569 ntpd[1430]: 13 Aug 07:19:00 ntpd[1430]: Listen normally on 15 cali5c7c4884c1c [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 07:19:00.860569 ntpd[1430]: 13 Aug 07:19:00 ntpd[1430]: Listen normally on 16 vxlan.calico [fe80::6495:efff:fe52:2ac9%12]:123 Aug 13 07:19:00.859324 ntpd[1430]: Listen normally on 9 calibf5f020c0d2 [fe80::ecee:eeff:feee:eeee%5]:123 Aug 13 07:19:00.859402 ntpd[1430]: Listen normally on 10 calic316141921e [fe80::ecee:eeff:feee:eeee%6]:123 Aug 13 07:19:00.859462 ntpd[1430]: Listen normally on 11 cali12d8b4e0ed2 [fe80::ecee:eeff:feee:eeee%7]:123 Aug 13 07:19:00.859538 ntpd[1430]: Listen normally on 12 cali5938e54b343 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 07:19:00.859608 ntpd[1430]: Listen normally on 13 calia0dbe95dea2 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 07:19:00.859675 ntpd[1430]: Listen normally on 14 cali3a0aa9c5c46 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 07:19:00.859736 ntpd[1430]: Listen normally on 15 cali5c7c4884c1c [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 07:19:00.859796 ntpd[1430]: Listen normally on 16 vxlan.calico [fe80::6495:efff:fe52:2ac9%12]:123 Aug 13 07:19:01.369569 containerd[1462]: time="2025-08-13T07:19:01.369111904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:01.370620 containerd[1462]: 
time="2025-08-13T07:19:01.370538416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:19:01.371260 containerd[1462]: time="2025-08-13T07:19:01.371156219Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:01.376540 containerd[1462]: time="2025-08-13T07:19:01.375625919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:01.377310 containerd[1462]: time="2025-08-13T07:19:01.377274348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.515344063s" Aug 13 07:19:01.377496 containerd[1462]: time="2025-08-13T07:19:01.377469511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:19:01.381929 containerd[1462]: time="2025-08-13T07:19:01.381083463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:19:01.384869 containerd[1462]: time="2025-08-13T07:19:01.384817337Z" level=info msg="CreateContainer within sandbox \"7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:19:01.405647 containerd[1462]: time="2025-08-13T07:19:01.405362742Z" level=info msg="CreateContainer within sandbox 
\"7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"85982665475bafe57dc11cf4888c1708181a4de3c832bef7c021be4a87a9a464\"" Aug 13 07:19:01.409327 containerd[1462]: time="2025-08-13T07:19:01.408544336Z" level=info msg="StartContainer for \"85982665475bafe57dc11cf4888c1708181a4de3c832bef7c021be4a87a9a464\"" Aug 13 07:19:01.466798 systemd[1]: Started cri-containerd-85982665475bafe57dc11cf4888c1708181a4de3c832bef7c021be4a87a9a464.scope - libcontainer container 85982665475bafe57dc11cf4888c1708181a4de3c832bef7c021be4a87a9a464. Aug 13 07:19:01.527567 containerd[1462]: time="2025-08-13T07:19:01.527475975Z" level=info msg="StartContainer for \"85982665475bafe57dc11cf4888c1708181a4de3c832bef7c021be4a87a9a464\" returns successfully" Aug 13 07:19:02.811804 kubelet[2602]: I0813 07:19:02.811171 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fb794d684-84gfw" podStartSLOduration=31.618707053 podStartE2EDuration="39.811145546s" podCreationTimestamp="2025-08-13 07:18:23 +0000 UTC" firstStartedPulling="2025-08-13 07:18:53.187142546 +0000 UTC m=+47.216824865" lastFinishedPulling="2025-08-13 07:19:01.379581037 +0000 UTC m=+55.409263358" observedRunningTime="2025-08-13 07:19:01.702960599 +0000 UTC m=+55.732642929" watchObservedRunningTime="2025-08-13 07:19:02.811145546 +0000 UTC m=+56.840827874" Aug 13 07:19:04.087306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075763924.mount: Deactivated successfully. 
Aug 13 07:19:05.063706 containerd[1462]: time="2025-08-13T07:19:05.063637004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:05.067910 containerd[1462]: time="2025-08-13T07:19:05.067737966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:19:05.071712 containerd[1462]: time="2025-08-13T07:19:05.071242195Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:05.077431 containerd[1462]: time="2025-08-13T07:19:05.077370247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:05.078619 containerd[1462]: time="2025-08-13T07:19:05.078562071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.697429457s" Aug 13 07:19:05.078740 containerd[1462]: time="2025-08-13T07:19:05.078625989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:19:05.083382 containerd[1462]: time="2025-08-13T07:19:05.081603422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:19:05.083382 containerd[1462]: time="2025-08-13T07:19:05.083359996Z" level=info msg="CreateContainer within sandbox \"3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:19:05.117075 containerd[1462]: time="2025-08-13T07:19:05.117022157Z" level=info msg="CreateContainer within sandbox \"3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c0f1dfd7cdbdcc860b802f3cc4e0842f7be5b28dc48f28f3bce0c9411b78f51d\"" Aug 13 07:19:05.119446 containerd[1462]: time="2025-08-13T07:19:05.119201582Z" level=info msg="StartContainer for \"c0f1dfd7cdbdcc860b802f3cc4e0842f7be5b28dc48f28f3bce0c9411b78f51d\"" Aug 13 07:19:05.182980 systemd[1]: Started cri-containerd-c0f1dfd7cdbdcc860b802f3cc4e0842f7be5b28dc48f28f3bce0c9411b78f51d.scope - libcontainer container c0f1dfd7cdbdcc860b802f3cc4e0842f7be5b28dc48f28f3bce0c9411b78f51d. Aug 13 07:19:05.247557 containerd[1462]: time="2025-08-13T07:19:05.247327625Z" level=info msg="StartContainer for \"c0f1dfd7cdbdcc860b802f3cc4e0842f7be5b28dc48f28f3bce0c9411b78f51d\" returns successfully" Aug 13 07:19:06.211766 containerd[1462]: time="2025-08-13T07:19:06.210978360Z" level=info msg="StopPodSandbox for \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\"" Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.284 [WARNING][5263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"84e92942-4591-4943-868f-92a2efe7e6af", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0", Pod:"coredns-668d6bf9bc-s4zz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic316141921e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.285 [INFO][5263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.285 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" iface="eth0" netns="" Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.285 [INFO][5263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.285 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.351 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" HandleID="k8s-pod-network.59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.353 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.353 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.366 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" HandleID="k8s-pod-network.59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.366 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" HandleID="k8s-pod-network.59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.369 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:06.374270 containerd[1462]: 2025-08-13 07:19:06.371 [INFO][5263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:19:06.375190 containerd[1462]: time="2025-08-13T07:19:06.374377766Z" level=info msg="TearDown network for sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\" successfully" Aug 13 07:19:06.375190 containerd[1462]: time="2025-08-13T07:19:06.374453422Z" level=info msg="StopPodSandbox for \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\" returns successfully" Aug 13 07:19:06.376335 containerd[1462]: time="2025-08-13T07:19:06.375540797Z" level=info msg="RemovePodSandbox for \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\"" Aug 13 07:19:06.376480 containerd[1462]: time="2025-08-13T07:19:06.376341328Z" level=info msg="Forcibly stopping sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\"" Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.471 [WARNING][5289] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"84e92942-4591-4943-868f-92a2efe7e6af", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"dd937a2d968a652dde87c0137710d6cff33884d6476b8e9405b2e66c5c41bdb0", Pod:"coredns-668d6bf9bc-s4zz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic316141921e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.472 [INFO][5289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.472 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" iface="eth0" netns="" Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.473 [INFO][5289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.473 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.541 [INFO][5297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" HandleID="k8s-pod-network.59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.541 [INFO][5297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.541 [INFO][5297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.556 [WARNING][5297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" HandleID="k8s-pod-network.59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.557 [INFO][5297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" HandleID="k8s-pod-network.59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--s4zz2-eth0" Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.560 [INFO][5297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:06.567299 containerd[1462]: 2025-08-13 07:19:06.564 [INFO][5289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8" Aug 13 07:19:06.567299 containerd[1462]: time="2025-08-13T07:19:06.567232681Z" level=info msg="TearDown network for sandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\" successfully" Aug 13 07:19:06.576226 containerd[1462]: time="2025-08-13T07:19:06.575479858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:19:06.576226 containerd[1462]: time="2025-08-13T07:19:06.575808586Z" level=info msg="RemovePodSandbox \"59073061deff68190f9a4e860867e76185698a5b4d352d26536dc724ac865ee8\" returns successfully" Aug 13 07:19:06.577145 containerd[1462]: time="2025-08-13T07:19:06.577030938Z" level=info msg="StopPodSandbox for \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\"" Aug 13 07:19:06.718735 containerd[1462]: time="2025-08-13T07:19:06.717643476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:06.729998 containerd[1462]: time="2025-08-13T07:19:06.729912776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:19:06.731920 containerd[1462]: time="2025-08-13T07:19:06.731285556Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:06.735073 containerd[1462]: time="2025-08-13T07:19:06.735009385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:06.737288 containerd[1462]: time="2025-08-13T07:19:06.737209083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.655558416s" Aug 13 07:19:06.737288 containerd[1462]: time="2025-08-13T07:19:06.737264011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.658 [WARNING][5311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d2864426-4b9a-4a74-b95d-9856eb5042a1", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003", Pod:"coredns-668d6bf9bc-z22sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a0aa9c5c46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.658 [INFO][5311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.659 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" iface="eth0" netns="" Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.659 [INFO][5311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.659 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.704 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" HandleID="k8s-pod-network.5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.704 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.706 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.717 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" HandleID="k8s-pod-network.5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.717 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" HandleID="k8s-pod-network.5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.722 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:06.737553 containerd[1462]: 2025-08-13 07:19:06.724 [INFO][5311] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:19:06.737553 containerd[1462]: time="2025-08-13T07:19:06.737478976Z" level=info msg="TearDown network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\" successfully" Aug 13 07:19:06.738365 containerd[1462]: time="2025-08-13T07:19:06.737503434Z" level=info msg="StopPodSandbox for \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\" returns successfully" Aug 13 07:19:06.740095 containerd[1462]: time="2025-08-13T07:19:06.739535975Z" level=info msg="RemovePodSandbox for \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\"" Aug 13 07:19:06.740095 containerd[1462]: time="2025-08-13T07:19:06.739581871Z" level=info msg="Forcibly stopping sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\"" Aug 13 07:19:06.743153 containerd[1462]: time="2025-08-13T07:19:06.743105235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:19:06.746699 containerd[1462]: time="2025-08-13T07:19:06.746332787Z" level=info msg="CreateContainer within sandbox \"1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:19:06.812108 containerd[1462]: time="2025-08-13T07:19:06.812052029Z" level=info msg="CreateContainer within sandbox \"1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a6a0cff34ae4aefb24e3b3429b7d7b85d2e92c3174db22c5f2eb35cf72e14e88\"" Aug 13 07:19:06.815553 containerd[1462]: time="2025-08-13T07:19:06.815426529Z" level=info msg="StartContainer for \"a6a0cff34ae4aefb24e3b3429b7d7b85d2e92c3174db22c5f2eb35cf72e14e88\"" Aug 13 07:19:06.966837 systemd[1]: Started cri-containerd-a6a0cff34ae4aefb24e3b3429b7d7b85d2e92c3174db22c5f2eb35cf72e14e88.scope - libcontainer container 
a6a0cff34ae4aefb24e3b3429b7d7b85d2e92c3174db22c5f2eb35cf72e14e88. Aug 13 07:19:07.033247 containerd[1462]: time="2025-08-13T07:19:07.031982992Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:07.033426 containerd[1462]: time="2025-08-13T07:19:07.033319351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:19:07.046855 containerd[1462]: time="2025-08-13T07:19:07.046800990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 303.635882ms" Aug 13 07:19:07.047126 containerd[1462]: time="2025-08-13T07:19:07.047097542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:19:07.062270 containerd[1462]: time="2025-08-13T07:19:07.062195547Z" level=info msg="CreateContainer within sandbox \"150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:19:07.096154 containerd[1462]: time="2025-08-13T07:19:07.095681488Z" level=info msg="StartContainer for \"a6a0cff34ae4aefb24e3b3429b7d7b85d2e92c3174db22c5f2eb35cf72e14e88\" returns successfully" Aug 13 07:19:07.101625 containerd[1462]: time="2025-08-13T07:19:07.101422142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:06.952 [WARNING][5339] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d2864426-4b9a-4a74-b95d-9856eb5042a1", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"d008b86f7e501aa2754ae41d0d2e91b6b888bb485fe75bb599e7972464a97003", Pod:"coredns-668d6bf9bc-z22sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a0aa9c5c46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:06.954 [INFO][5339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:06.954 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" iface="eth0" netns="" Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:06.954 [INFO][5339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:06.954 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:07.068 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" HandleID="k8s-pod-network.5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:07.069 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:07.070 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:07.098 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" HandleID="k8s-pod-network.5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:07.098 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" HandleID="k8s-pod-network.5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--z22sd-eth0" Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:07.108 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:07.117725 containerd[1462]: 2025-08-13 07:19:07.114 [INFO][5339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726" Aug 13 07:19:07.119197 containerd[1462]: time="2025-08-13T07:19:07.117741026Z" level=info msg="TearDown network for sandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\" successfully" Aug 13 07:19:07.120884 containerd[1462]: time="2025-08-13T07:19:07.120722607Z" level=info msg="CreateContainer within sandbox \"150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"533d844a49476eb1c6f1bbb0274922a789341cc2a5dce389f30e1d07c0578e0b\"" Aug 13 07:19:07.123393 containerd[1462]: time="2025-08-13T07:19:07.121875170Z" level=info msg="StartContainer for \"533d844a49476eb1c6f1bbb0274922a789341cc2a5dce389f30e1d07c0578e0b\"" Aug 13 07:19:07.129580 containerd[1462]: time="2025-08-13T07:19:07.129492194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:19:07.129791 containerd[1462]: time="2025-08-13T07:19:07.129656837Z" level=info msg="RemovePodSandbox \"5646964b7079753eb92d46b72a36b24f695600e3a2c312ed8fa38087a9fde726\" returns successfully" Aug 13 07:19:07.131776 containerd[1462]: time="2025-08-13T07:19:07.131738895Z" level=info msg="StopPodSandbox for \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\"" Aug 13 07:19:07.172838 systemd[1]: Started cri-containerd-533d844a49476eb1c6f1bbb0274922a789341cc2a5dce389f30e1d07c0578e0b.scope - libcontainer container 533d844a49476eb1c6f1bbb0274922a789341cc2a5dce389f30e1d07c0578e0b. Aug 13 07:19:07.287806 containerd[1462]: time="2025-08-13T07:19:07.287760330Z" level=info msg="StartContainer for \"533d844a49476eb1c6f1bbb0274922a789341cc2a5dce389f30e1d07c0578e0b\" returns successfully" Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.229 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0", GenerateName:"calico-kube-controllers-6fbc6d7cb9-", Namespace:"calico-system", SelfLink:"", UID:"454544c9-e57d-4404-ae95-88b611efc21a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbc6d7cb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7", Pod:"calico-kube-controllers-6fbc6d7cb9-xtnlk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf5f020c0d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.229 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:19:07.302266 
containerd[1462]: 2025-08-13 07:19:07.229 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" iface="eth0" netns="" Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.230 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.230 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.275 [INFO][5447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" HandleID="k8s-pod-network.df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.276 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.276 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.289 [WARNING][5447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" HandleID="k8s-pod-network.df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.289 [INFO][5447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" HandleID="k8s-pod-network.df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.292 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:07.302266 containerd[1462]: 2025-08-13 07:19:07.298 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:19:07.304444 containerd[1462]: time="2025-08-13T07:19:07.303026708Z" level=info msg="TearDown network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\" successfully" Aug 13 07:19:07.304444 containerd[1462]: time="2025-08-13T07:19:07.303116760Z" level=info msg="StopPodSandbox for \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\" returns successfully" Aug 13 07:19:07.304444 containerd[1462]: time="2025-08-13T07:19:07.303847029Z" level=info msg="RemovePodSandbox for \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\"" Aug 13 07:19:07.304444 containerd[1462]: time="2025-08-13T07:19:07.303888153Z" level=info msg="Forcibly stopping sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\"" Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.368 [WARNING][5471] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0", GenerateName:"calico-kube-controllers-6fbc6d7cb9-", Namespace:"calico-system", SelfLink:"", UID:"454544c9-e57d-4404-ae95-88b611efc21a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbc6d7cb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"4571ab390287b0262f166cc86fe3ccb6233740ca10f25b25931306ffbcc7e0b7", Pod:"calico-kube-controllers-6fbc6d7cb9-xtnlk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf5f020c0d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.369 [INFO][5471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:19:07.423353 
containerd[1462]: 2025-08-13 07:19:07.369 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" iface="eth0" netns="" Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.369 [INFO][5471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.369 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.406 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" HandleID="k8s-pod-network.df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.406 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.406 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.416 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" HandleID="k8s-pod-network.df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.416 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" HandleID="k8s-pod-network.df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--kube--controllers--6fbc6d7cb9--xtnlk-eth0" Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.419 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:07.423353 containerd[1462]: 2025-08-13 07:19:07.421 [INFO][5471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423" Aug 13 07:19:07.426321 containerd[1462]: time="2025-08-13T07:19:07.423434028Z" level=info msg="TearDown network for sandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\" successfully" Aug 13 07:19:07.429606 containerd[1462]: time="2025-08-13T07:19:07.429367012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:19:07.429606 containerd[1462]: time="2025-08-13T07:19:07.429457768Z" level=info msg="RemovePodSandbox \"df92e7ce584ead9a641f95479a424ca91f44a586ecf63a010553ee1f55e2e423\" returns successfully" Aug 13 07:19:07.430395 containerd[1462]: time="2025-08-13T07:19:07.430303716Z" level=info msg="StopPodSandbox for \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\"" Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.488 [WARNING][5496] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9995649e-a9c2-4dd0-ab3a-469f68507e9a", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6", Pod:"csi-node-driver-vmnnx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.35.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0dbe95dea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.488 [INFO][5496] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.488 [INFO][5496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" iface="eth0" netns="" Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.488 [INFO][5496] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.488 [INFO][5496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.526 [INFO][5503] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" HandleID="k8s-pod-network.e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.526 [INFO][5503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.527 [INFO][5503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.536 [WARNING][5503] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" HandleID="k8s-pod-network.e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.537 [INFO][5503] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" HandleID="k8s-pod-network.e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.539 [INFO][5503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:07.544243 containerd[1462]: 2025-08-13 07:19:07.541 [INFO][5496] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:19:07.544243 containerd[1462]: time="2025-08-13T07:19:07.543990456Z" level=info msg="TearDown network for sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\" successfully" Aug 13 07:19:07.544243 containerd[1462]: time="2025-08-13T07:19:07.544025878Z" level=info msg="StopPodSandbox for \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\" returns successfully" Aug 13 07:19:07.546769 containerd[1462]: time="2025-08-13T07:19:07.544758826Z" level=info msg="RemovePodSandbox for \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\"" Aug 13 07:19:07.546769 containerd[1462]: time="2025-08-13T07:19:07.544796158Z" level=info msg="Forcibly stopping sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\"" Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.622 [WARNING][5518] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9995649e-a9c2-4dd0-ab3a-469f68507e9a", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6", Pod:"csi-node-driver-vmnnx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0dbe95dea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.622 [INFO][5518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.623 
[INFO][5518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" iface="eth0" netns="" Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.623 [INFO][5518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.623 [INFO][5518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.675 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" HandleID="k8s-pod-network.e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.675 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.675 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.685 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" HandleID="k8s-pod-network.e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.685 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" HandleID="k8s-pod-network.e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-csi--node--driver--vmnnx-eth0" Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.687 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:07.693583 containerd[1462]: 2025-08-13 07:19:07.689 [INFO][5518] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a" Aug 13 07:19:07.693583 containerd[1462]: time="2025-08-13T07:19:07.692381350Z" level=info msg="TearDown network for sandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\" successfully" Aug 13 07:19:07.698837 containerd[1462]: time="2025-08-13T07:19:07.698750249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:19:07.699227 containerd[1462]: time="2025-08-13T07:19:07.699100733Z" level=info msg="RemovePodSandbox \"e81e2c42a4ebe4d20b89c51d0c79c97c13dc20b49446c52da336f02964c1703a\" returns successfully" Aug 13 07:19:07.700755 containerd[1462]: time="2025-08-13T07:19:07.700493860Z" level=info msg="StopPodSandbox for \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\"" Aug 13 07:19:07.761478 kubelet[2602]: I0813 07:19:07.761173 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-bbxp6" podStartSLOduration=28.972372571 podStartE2EDuration="40.761143974s" podCreationTimestamp="2025-08-13 07:18:27 +0000 UTC" firstStartedPulling="2025-08-13 07:18:53.291422141 +0000 UTC m=+47.321104722" lastFinishedPulling="2025-08-13 07:19:05.08019382 +0000 UTC m=+59.109876125" observedRunningTime="2025-08-13 07:19:05.743794682 +0000 UTC m=+59.773477006" watchObservedRunningTime="2025-08-13 07:19:07.761143974 +0000 UTC m=+61.790826302" Aug 13 07:19:07.767715 kubelet[2602]: I0813 07:19:07.767628 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fb794d684-rjd8h" podStartSLOduration=35.21716445 podStartE2EDuration="44.767596634s" podCreationTimestamp="2025-08-13 07:18:23 +0000 UTC" firstStartedPulling="2025-08-13 07:18:57.500824395 +0000 UTC m=+51.530506710" lastFinishedPulling="2025-08-13 07:19:07.051256574 +0000 UTC m=+61.080938894" observedRunningTime="2025-08-13 07:19:07.766013536 +0000 UTC m=+61.795695863" watchObservedRunningTime="2025-08-13 07:19:07.767596634 +0000 UTC m=+61.797278963" Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.856 [WARNING][5539] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d0cf14fd-3d11-43c2-a719-49dbd30906de", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297", Pod:"goldmane-768f4c5c69-bbxp6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5938e54b343", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.857 [INFO][5539] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.857 [INFO][5539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" iface="eth0" netns="" Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.858 [INFO][5539] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.858 [INFO][5539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.910 [INFO][5566] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" HandleID="k8s-pod-network.711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.910 [INFO][5566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.910 [INFO][5566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.920 [WARNING][5566] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" HandleID="k8s-pod-network.711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.921 [INFO][5566] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" HandleID="k8s-pod-network.711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.925 [INFO][5566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:07.931096 containerd[1462]: 2025-08-13 07:19:07.928 [INFO][5539] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:19:07.931096 containerd[1462]: time="2025-08-13T07:19:07.930913912Z" level=info msg="TearDown network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\" successfully" Aug 13 07:19:07.931096 containerd[1462]: time="2025-08-13T07:19:07.930949600Z" level=info msg="StopPodSandbox for \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\" returns successfully" Aug 13 07:19:07.933912 containerd[1462]: time="2025-08-13T07:19:07.932926380Z" level=info msg="RemovePodSandbox for \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\"" Aug 13 07:19:07.933912 containerd[1462]: time="2025-08-13T07:19:07.932973485Z" level=info msg="Forcibly stopping sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\"" Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.018 [WARNING][5583] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d0cf14fd-3d11-43c2-a719-49dbd30906de", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"3f1ac4d6de79a675cacac337d4a9b780fd742a42453069cec62862ef49b8d297", Pod:"goldmane-768f4c5c69-bbxp6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5938e54b343", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.028 [INFO][5583] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.028 [INFO][5583] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" iface="eth0" netns="" Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.028 [INFO][5583] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.029 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.094 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" HandleID="k8s-pod-network.711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.094 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.095 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.106 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" HandleID="k8s-pod-network.711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.106 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" HandleID="k8s-pod-network.711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--bbxp6-eth0" Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.108 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:08.113580 containerd[1462]: 2025-08-13 07:19:08.110 [INFO][5583] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60" Aug 13 07:19:08.113580 containerd[1462]: time="2025-08-13T07:19:08.113386823Z" level=info msg="TearDown network for sandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\" successfully" Aug 13 07:19:08.126797 containerd[1462]: time="2025-08-13T07:19:08.126727690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:19:08.126993 containerd[1462]: time="2025-08-13T07:19:08.126832454Z" level=info msg="RemovePodSandbox \"711f9b2b0f6af11bda1ab96e2080e9f632337e277353b43ca6fb49a48b607b60\" returns successfully" Aug 13 07:19:08.128335 containerd[1462]: time="2025-08-13T07:19:08.127904327Z" level=info msg="StopPodSandbox for \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\"" Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.196 [WARNING][5607] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0", GenerateName:"calico-apiserver-5fb794d684-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8b0bd01-1905-4ea2-9587-6ddd1435f3f6", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fb794d684", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04", Pod:"calico-apiserver-5fb794d684-rjd8h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.35.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c7c4884c1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.197 [INFO][5607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.197 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" iface="eth0" netns="" Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.197 [INFO][5607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.197 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.246 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" HandleID="k8s-pod-network.8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.247 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.247 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.260 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" HandleID="k8s-pod-network.8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.260 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" HandleID="k8s-pod-network.8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.262 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:08.267259 containerd[1462]: 2025-08-13 07:19:08.264 [INFO][5607] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:19:08.267259 containerd[1462]: time="2025-08-13T07:19:08.267204564Z" level=info msg="TearDown network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\" successfully" Aug 13 07:19:08.268889 containerd[1462]: time="2025-08-13T07:19:08.268849697Z" level=info msg="StopPodSandbox for \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\" returns successfully" Aug 13 07:19:08.269774 containerd[1462]: time="2025-08-13T07:19:08.269717327Z" level=info msg="RemovePodSandbox for \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\"" Aug 13 07:19:08.270253 containerd[1462]: time="2025-08-13T07:19:08.269786058Z" level=info msg="Forcibly stopping sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\"" Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.346 [WARNING][5628] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0", GenerateName:"calico-apiserver-5fb794d684-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8b0bd01-1905-4ea2-9587-6ddd1435f3f6", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fb794d684", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"150b6ad71482f478d6fb50af0832d586d7e008726baa229c1ce5715f2a825c04", Pod:"calico-apiserver-5fb794d684-rjd8h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c7c4884c1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.346 [INFO][5628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 
07:19:08.346 [INFO][5628] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" iface="eth0" netns="" Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.346 [INFO][5628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.346 [INFO][5628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.391 [INFO][5635] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" HandleID="k8s-pod-network.8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.391 [INFO][5635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.391 [INFO][5635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.405 [WARNING][5635] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" HandleID="k8s-pod-network.8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.405 [INFO][5635] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" HandleID="k8s-pod-network.8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--rjd8h-eth0" Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.407 [INFO][5635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:08.412285 containerd[1462]: 2025-08-13 07:19:08.409 [INFO][5628] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0" Aug 13 07:19:08.414590 containerd[1462]: time="2025-08-13T07:19:08.412342336Z" level=info msg="TearDown network for sandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\" successfully" Aug 13 07:19:08.420545 containerd[1462]: time="2025-08-13T07:19:08.418238844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:19:08.420545 containerd[1462]: time="2025-08-13T07:19:08.418386447Z" level=info msg="RemovePodSandbox \"8c0f05cd3416be0497d908136fb5cb3d173d4d4f6d1d512585dc0219cb24d7d0\" returns successfully" Aug 13 07:19:08.420545 containerd[1462]: time="2025-08-13T07:19:08.419064822Z" level=info msg="StopPodSandbox for \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\"" Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.486 [WARNING][5649] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.487 [INFO][5649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.487 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" iface="eth0" netns="" Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.487 [INFO][5649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.487 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.536 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" HandleID="k8s-pod-network.a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.537 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.538 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.553 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" HandleID="k8s-pod-network.a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.553 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" HandleID="k8s-pod-network.a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.555 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:08.564544 containerd[1462]: 2025-08-13 07:19:08.559 [INFO][5649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:19:08.564544 containerd[1462]: time="2025-08-13T07:19:08.563724676Z" level=info msg="TearDown network for sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\" successfully" Aug 13 07:19:08.564544 containerd[1462]: time="2025-08-13T07:19:08.563757206Z" level=info msg="StopPodSandbox for \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\" returns successfully" Aug 13 07:19:08.565441 containerd[1462]: time="2025-08-13T07:19:08.564712362Z" level=info msg="RemovePodSandbox for \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\"" Aug 13 07:19:08.565441 containerd[1462]: time="2025-08-13T07:19:08.564749870Z" level=info msg="Forcibly stopping sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\"" Aug 13 07:19:08.741208 kubelet[2602]: I0813 07:19:08.740654 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 
07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.680 [WARNING][5671] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" WorkloadEndpoint="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.681 [INFO][5671] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.681 [INFO][5671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" iface="eth0" netns="" Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.681 [INFO][5671] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.681 [INFO][5671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.768 [INFO][5684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" HandleID="k8s-pod-network.a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.768 [INFO][5684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.768 [INFO][5684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.780 [WARNING][5684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" HandleID="k8s-pod-network.a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.780 [INFO][5684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" HandleID="k8s-pod-network.a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-whisker--65577c7dd--fkmsm-eth0" Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.783 [INFO][5684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:08.791591 containerd[1462]: 2025-08-13 07:19:08.788 [INFO][5671] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a" Aug 13 07:19:08.793368 containerd[1462]: time="2025-08-13T07:19:08.791606605Z" level=info msg="TearDown network for sandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\" successfully" Aug 13 07:19:08.804256 containerd[1462]: time="2025-08-13T07:19:08.803763727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:19:08.804256 containerd[1462]: time="2025-08-13T07:19:08.803870032Z" level=info msg="RemovePodSandbox \"a6154f84ed79096fa76ccf5cbcfc58716800d608d542fbf9a7a9529adf13640a\" returns successfully" Aug 13 07:19:08.805420 containerd[1462]: time="2025-08-13T07:19:08.804996746Z" level=info msg="StopPodSandbox for \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\"" Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:08.953 [WARNING][5704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0", GenerateName:"calico-apiserver-5fb794d684-", Namespace:"calico-apiserver", SelfLink:"", UID:"8937aae4-009b-4f60-9764-2d5d28342995", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fb794d684", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10", Pod:"calico-apiserver-5fb794d684-84gfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.35.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d8b4e0ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:08.953 [INFO][5704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:08.953 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" iface="eth0" netns="" Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:08.955 [INFO][5704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:08.955 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:09.034 [INFO][5712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" HandleID="k8s-pod-network.cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:09.037 [INFO][5712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:09.038 [INFO][5712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:09.078 [WARNING][5712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" HandleID="k8s-pod-network.cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:09.080 [INFO][5712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" HandleID="k8s-pod-network.cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:09.083 [INFO][5712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:09.095363 containerd[1462]: 2025-08-13 07:19:09.091 [INFO][5704] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:19:09.099124 containerd[1462]: time="2025-08-13T07:19:09.098684036Z" level=info msg="TearDown network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\" successfully" Aug 13 07:19:09.099124 containerd[1462]: time="2025-08-13T07:19:09.098774973Z" level=info msg="StopPodSandbox for \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\" returns successfully" Aug 13 07:19:09.101926 containerd[1462]: time="2025-08-13T07:19:09.101742761Z" level=info msg="RemovePodSandbox for \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\"" Aug 13 07:19:09.101926 containerd[1462]: time="2025-08-13T07:19:09.101791057Z" level=info msg="Forcibly stopping sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\"" Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.245 [WARNING][5726] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0", GenerateName:"calico-apiserver-5fb794d684-", Namespace:"calico-apiserver", SelfLink:"", UID:"8937aae4-009b-4f60-9764-2d5d28342995", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fb794d684", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-f405e864ae7be58182da.c.flatcar-212911.internal", ContainerID:"7089056acf938b808ff937c1ca7aa2d3e7ef0e9d3af9dd687ec984d5e481bc10", Pod:"calico-apiserver-5fb794d684-84gfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d8b4e0ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.245 [INFO][5726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 
07:19:09.245 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" iface="eth0" netns="" Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.245 [INFO][5726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.245 [INFO][5726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.357 [INFO][5733] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" HandleID="k8s-pod-network.cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.357 [INFO][5733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.357 [INFO][5733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.378 [WARNING][5733] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" HandleID="k8s-pod-network.cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.378 [INFO][5733] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" HandleID="k8s-pod-network.cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Workload="ci--4081--3--5--f405e864ae7be58182da.c.flatcar--212911.internal-k8s-calico--apiserver--5fb794d684--84gfw-eth0" Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.381 [INFO][5733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:09.387931 containerd[1462]: 2025-08-13 07:19:09.384 [INFO][5726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb" Aug 13 07:19:09.387931 containerd[1462]: time="2025-08-13T07:19:09.387879565Z" level=info msg="TearDown network for sandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\" successfully" Aug 13 07:19:09.396132 containerd[1462]: time="2025-08-13T07:19:09.395926086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:19:09.396132 containerd[1462]: time="2025-08-13T07:19:09.396027265Z" level=info msg="RemovePodSandbox \"cc67dd419a70634b4bf0052b3d7a14cf8c6a7d294c1ba9fcd88d3db30d2447eb\" returns successfully" Aug 13 07:19:09.577560 containerd[1462]: time="2025-08-13T07:19:09.575594086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:09.577560 containerd[1462]: time="2025-08-13T07:19:09.576861897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:19:09.579874 containerd[1462]: time="2025-08-13T07:19:09.579298675Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:09.582999 containerd[1462]: time="2025-08-13T07:19:09.582911129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:09.584331 containerd[1462]: time="2025-08-13T07:19:09.584158757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.482561865s" Aug 13 07:19:09.584331 containerd[1462]: time="2025-08-13T07:19:09.584208702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:19:09.590425 containerd[1462]: 
time="2025-08-13T07:19:09.590226485Z" level=info msg="CreateContainer within sandbox \"1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:19:09.620763 containerd[1462]: time="2025-08-13T07:19:09.620594875Z" level=info msg="CreateContainer within sandbox \"1a3339fef72c56781f487698882c2f4ac5af9dbfb3151f60f9d192cb78d60dd6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7c44fa12b0199e8b94069392969024f35752c366e72d80f57b60b07a7104fee0\"" Aug 13 07:19:09.624557 containerd[1462]: time="2025-08-13T07:19:09.622731567Z" level=info msg="StartContainer for \"7c44fa12b0199e8b94069392969024f35752c366e72d80f57b60b07a7104fee0\"" Aug 13 07:19:09.626215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460447468.mount: Deactivated successfully. Aug 13 07:19:09.711968 systemd[1]: Started cri-containerd-7c44fa12b0199e8b94069392969024f35752c366e72d80f57b60b07a7104fee0.scope - libcontainer container 7c44fa12b0199e8b94069392969024f35752c366e72d80f57b60b07a7104fee0. Aug 13 07:19:09.778932 containerd[1462]: time="2025-08-13T07:19:09.778304838Z" level=info msg="StartContainer for \"7c44fa12b0199e8b94069392969024f35752c366e72d80f57b60b07a7104fee0\" returns successfully" Aug 13 07:19:10.368005 kubelet[2602]: I0813 07:19:10.367968 2602 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:19:10.368005 kubelet[2602]: I0813 07:19:10.368008 2602 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:19:10.786234 systemd[1]: run-containerd-runc-k8s.io-c0f1dfd7cdbdcc860b802f3cc4e0842f7be5b28dc48f28f3bce0c9411b78f51d-runc.JnQ14L.mount: Deactivated successfully. 
Aug 13 07:19:10.811251 kubelet[2602]: I0813 07:19:10.811171 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vmnnx" podStartSLOduration=30.360999434 podStartE2EDuration="42.811144436s" podCreationTimestamp="2025-08-13 07:18:28 +0000 UTC" firstStartedPulling="2025-08-13 07:18:57.136756154 +0000 UTC m=+51.166438540" lastFinishedPulling="2025-08-13 07:19:09.586901221 +0000 UTC m=+63.616583542" observedRunningTime="2025-08-13 07:19:10.809137086 +0000 UTC m=+64.838819417" watchObservedRunningTime="2025-08-13 07:19:10.811144436 +0000 UTC m=+64.840826765" Aug 13 07:19:22.486074 systemd[1]: Started sshd@12-10.128.0.37:22-139.178.68.195:54982.service - OpenSSH per-connection server daemon (139.178.68.195:54982). Aug 13 07:19:22.805617 sshd[5833]: Accepted publickey for core from 139.178.68.195 port 54982 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:19:22.807047 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:22.817657 systemd-logind[1441]: New session 10 of user core. Aug 13 07:19:22.827824 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:19:23.234013 sshd[5833]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:23.243772 systemd[1]: sshd@12-10.128.0.37:22-139.178.68.195:54982.service: Deactivated successfully. Aug 13 07:19:23.249624 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:19:23.251499 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:19:23.254929 systemd-logind[1441]: Removed session 10. Aug 13 07:19:28.296692 systemd[1]: Started sshd@13-10.128.0.37:22-139.178.68.195:54986.service - OpenSSH per-connection server daemon (139.178.68.195:54986). 
Aug 13 07:19:28.602658 sshd[5877]: Accepted publickey for core from 139.178.68.195 port 54986 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:19:28.604887 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:28.612500 systemd-logind[1441]: New session 11 of user core. Aug 13 07:19:28.622066 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:19:29.035840 sshd[5877]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:29.047716 systemd[1]: sshd@13-10.128.0.37:22-139.178.68.195:54986.service: Deactivated successfully. Aug 13 07:19:29.054107 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:19:29.055932 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:19:29.058749 systemd-logind[1441]: Removed session 11. Aug 13 07:19:34.095805 systemd[1]: Started sshd@14-10.128.0.37:22-139.178.68.195:35766.service - OpenSSH per-connection server daemon (139.178.68.195:35766). Aug 13 07:19:34.408628 sshd[5910]: Accepted publickey for core from 139.178.68.195 port 35766 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:19:34.412612 sshd[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:34.426635 systemd-logind[1441]: New session 12 of user core. Aug 13 07:19:34.432902 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:19:34.758895 sshd[5910]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:34.769911 systemd[1]: sshd@14-10.128.0.37:22-139.178.68.195:35766.service: Deactivated successfully. Aug 13 07:19:34.774597 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:19:34.776902 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:19:34.779136 systemd-logind[1441]: Removed session 12. 
Aug 13 07:19:34.823906 systemd[1]: Started sshd@15-10.128.0.37:22-139.178.68.195:35768.service - OpenSSH per-connection server daemon (139.178.68.195:35768). Aug 13 07:19:35.137304 sshd[5924]: Accepted publickey for core from 139.178.68.195 port 35768 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:19:35.138253 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:35.146088 systemd-logind[1441]: New session 13 of user core. Aug 13 07:19:35.166822 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:19:35.551662 sshd[5924]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:35.562912 systemd[1]: sshd@15-10.128.0.37:22-139.178.68.195:35768.service: Deactivated successfully. Aug 13 07:19:35.571349 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:19:35.573127 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:19:35.578062 systemd-logind[1441]: Removed session 13. Aug 13 07:19:35.613976 systemd[1]: Started sshd@16-10.128.0.37:22-139.178.68.195:35778.service - OpenSSH per-connection server daemon (139.178.68.195:35778). Aug 13 07:19:35.931575 sshd[5935]: Accepted publickey for core from 139.178.68.195 port 35778 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:19:35.933851 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:35.945336 systemd-logind[1441]: New session 14 of user core. Aug 13 07:19:35.948771 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:19:36.281037 sshd[5935]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:36.290375 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:19:36.291892 systemd[1]: sshd@16-10.128.0.37:22-139.178.68.195:35778.service: Deactivated successfully. Aug 13 07:19:36.297134 systemd[1]: session-14.scope: Deactivated successfully. 
Aug 13 07:19:36.303169 systemd-logind[1441]: Removed session 14.
Aug 13 07:19:37.251252 kubelet[2602]: I0813 07:19:37.251199 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:19:41.343238 systemd[1]: Started sshd@17-10.128.0.37:22-139.178.68.195:37276.service - OpenSSH per-connection server daemon (139.178.68.195:37276).
Aug 13 07:19:41.644774 sshd[5979]: Accepted publickey for core from 139.178.68.195 port 37276 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:19:41.646943 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:41.653840 systemd-logind[1441]: New session 15 of user core.
Aug 13 07:19:41.659397 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:19:41.997414 sshd[5979]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:42.007114 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:19:42.008285 systemd[1]: sshd@17-10.128.0.37:22-139.178.68.195:37276.service: Deactivated successfully.
Aug 13 07:19:42.014538 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:19:42.018496 systemd-logind[1441]: Removed session 15.
Aug 13 07:19:47.061285 systemd[1]: Started sshd@18-10.128.0.37:22-139.178.68.195:37284.service - OpenSSH per-connection server daemon (139.178.68.195:37284).
Aug 13 07:19:47.382427 sshd[5998]: Accepted publickey for core from 139.178.68.195 port 37284 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:19:47.385333 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:47.393167 systemd-logind[1441]: New session 16 of user core.
Aug 13 07:19:47.401795 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 07:19:47.803350 sshd[5998]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:47.814836 systemd[1]: sshd@18-10.128.0.37:22-139.178.68.195:37284.service: Deactivated successfully.
Aug 13 07:19:47.816152 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:19:47.820766 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:19:47.827385 systemd-logind[1441]: Removed session 16.
Aug 13 07:19:52.860177 systemd[1]: Started sshd@19-10.128.0.37:22-139.178.68.195:39748.service - OpenSSH per-connection server daemon (139.178.68.195:39748).
Aug 13 07:19:53.176360 sshd[6033]: Accepted publickey for core from 139.178.68.195 port 39748 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:19:53.175793 sshd[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:53.186050 systemd-logind[1441]: New session 17 of user core.
Aug 13 07:19:53.193376 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:19:53.565874 sshd[6033]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:53.573590 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:19:53.574349 systemd[1]: sshd@19-10.128.0.37:22-139.178.68.195:39748.service: Deactivated successfully.
Aug 13 07:19:53.578010 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:19:53.580358 systemd-logind[1441]: Removed session 17.
Aug 13 07:19:58.624688 systemd[1]: Started sshd@20-10.128.0.37:22-139.178.68.195:39754.service - OpenSSH per-connection server daemon (139.178.68.195:39754).
Aug 13 07:19:58.943747 sshd[6047]: Accepted publickey for core from 139.178.68.195 port 39754 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:19:58.947505 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:58.955038 systemd-logind[1441]: New session 18 of user core.
Aug 13 07:19:58.962796 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:19:59.291875 sshd[6047]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:59.304187 systemd[1]: sshd@20-10.128.0.37:22-139.178.68.195:39754.service: Deactivated successfully.
Aug 13 07:19:59.310852 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:19:59.313227 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:19:59.316050 systemd-logind[1441]: Removed session 18.
Aug 13 07:19:59.352157 systemd[1]: Started sshd@21-10.128.0.37:22-139.178.68.195:39756.service - OpenSSH per-connection server daemon (139.178.68.195:39756).
Aug 13 07:19:59.657242 sshd[6060]: Accepted publickey for core from 139.178.68.195 port 39756 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:19:59.659676 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:59.667775 systemd-logind[1441]: New session 19 of user core.
Aug 13 07:19:59.676793 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 07:19:59.728597 systemd[1]: run-containerd-runc-k8s.io-71591586af03a4dc98be14995011c3a748662ba5a9963e74db5f51578f3c0414-runc.Vu3x3P.mount: Deactivated successfully.
Aug 13 07:20:00.096871 sshd[6060]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:00.107687 systemd[1]: sshd@21-10.128.0.37:22-139.178.68.195:39756.service: Deactivated successfully.
Aug 13 07:20:00.113946 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:20:00.118236 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:20:00.121100 systemd-logind[1441]: Removed session 19.
Aug 13 07:20:00.159917 systemd[1]: Started sshd@22-10.128.0.37:22-139.178.68.195:47192.service - OpenSSH per-connection server daemon (139.178.68.195:47192).
Aug 13 07:20:00.469366 sshd[6088]: Accepted publickey for core from 139.178.68.195 port 47192 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:20:00.469157 sshd[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:00.477450 systemd-logind[1441]: New session 20 of user core.
Aug 13 07:20:00.484962 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:20:01.733886 sshd[6088]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:01.745504 systemd[1]: sshd@22-10.128.0.37:22-139.178.68.195:47192.service: Deactivated successfully.
Aug 13 07:20:01.746168 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
Aug 13 07:20:01.754668 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 07:20:01.765216 systemd-logind[1441]: Removed session 20.
Aug 13 07:20:01.795002 systemd[1]: Started sshd@23-10.128.0.37:22-139.178.68.195:47196.service - OpenSSH per-connection server daemon (139.178.68.195:47196).
Aug 13 07:20:02.108243 sshd[6105]: Accepted publickey for core from 139.178.68.195 port 47196 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:20:02.111386 sshd[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:02.122184 systemd-logind[1441]: New session 21 of user core.
Aug 13 07:20:02.131068 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 07:20:02.728871 sshd[6105]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:02.738817 systemd[1]: sshd@23-10.128.0.37:22-139.178.68.195:47196.service: Deactivated successfully.
Aug 13 07:20:02.744172 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 07:20:02.747067 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit.
Aug 13 07:20:02.754117 systemd-logind[1441]: Removed session 21.
Aug 13 07:20:02.791798 systemd[1]: Started sshd@24-10.128.0.37:22-139.178.68.195:47210.service - OpenSSH per-connection server daemon (139.178.68.195:47210).
Aug 13 07:20:03.100265 sshd[6118]: Accepted publickey for core from 139.178.68.195 port 47210 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:20:03.102976 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:03.113802 systemd-logind[1441]: New session 22 of user core.
Aug 13 07:20:03.116821 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 07:20:03.439365 sshd[6118]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:03.446305 systemd[1]: sshd@24-10.128.0.37:22-139.178.68.195:47210.service: Deactivated successfully.
Aug 13 07:20:03.452237 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 07:20:03.456503 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit.
Aug 13 07:20:03.459041 systemd-logind[1441]: Removed session 22.
Aug 13 07:20:08.497979 systemd[1]: Started sshd@25-10.128.0.37:22-139.178.68.195:47220.service - OpenSSH per-connection server daemon (139.178.68.195:47220).
Aug 13 07:20:08.807586 sshd[6152]: Accepted publickey for core from 139.178.68.195 port 47220 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:20:08.809605 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:08.819051 systemd-logind[1441]: New session 23 of user core.
Aug 13 07:20:08.825834 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 07:20:09.196880 sshd[6152]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:09.204275 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit.
Aug 13 07:20:09.205724 systemd[1]: sshd@25-10.128.0.37:22-139.178.68.195:47220.service: Deactivated successfully.
Aug 13 07:20:09.213040 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 07:20:09.219501 systemd-logind[1441]: Removed session 23.
Aug 13 07:20:14.254694 systemd[1]: Started sshd@26-10.128.0.37:22-139.178.68.195:38386.service - OpenSSH per-connection server daemon (139.178.68.195:38386).
Aug 13 07:20:14.554551 sshd[6193]: Accepted publickey for core from 139.178.68.195 port 38386 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:20:14.557603 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:14.565805 systemd-logind[1441]: New session 24 of user core.
Aug 13 07:20:14.573823 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 07:20:14.918723 sshd[6193]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:14.927567 systemd[1]: sshd@26-10.128.0.37:22-139.178.68.195:38386.service: Deactivated successfully.
Aug 13 07:20:14.932132 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 07:20:14.934348 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit.
Aug 13 07:20:14.936453 systemd-logind[1441]: Removed session 24.
Aug 13 07:20:19.978664 systemd[1]: Started sshd@27-10.128.0.37:22-139.178.68.195:60506.service - OpenSSH per-connection server daemon (139.178.68.195:60506).
Aug 13 07:20:20.298653 sshd[6235]: Accepted publickey for core from 139.178.68.195 port 60506 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:20:20.300878 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:20.313865 systemd-logind[1441]: New session 25 of user core.
Aug 13 07:20:20.325694 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 07:20:20.639151 sshd[6235]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:20.649227 systemd[1]: sshd@27-10.128.0.37:22-139.178.68.195:60506.service: Deactivated successfully.
Aug 13 07:20:20.654840 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 07:20:20.657279 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit.
Aug 13 07:20:20.660254 systemd-logind[1441]: Removed session 25.