Aug 13 07:13:41.123468 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:13:41.123512 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:13:41.123531 kernel: BIOS-provided physical RAM map:
Aug 13 07:13:41.123546 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Aug 13 07:13:41.123559 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Aug 13 07:13:41.123573 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Aug 13 07:13:41.123591 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Aug 13 07:13:41.123609 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Aug 13 07:13:41.123624 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Aug 13 07:13:41.123639 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Aug 13 07:13:41.123654 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Aug 13 07:13:41.123669 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Aug 13 07:13:41.123684 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Aug 13 07:13:41.123700 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Aug 13 07:13:41.123722 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Aug 13 07:13:41.123763 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Aug 13 07:13:41.123780 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Aug 13 07:13:41.123797 kernel: NX (Execute Disable) protection: active
Aug 13 07:13:41.123822 kernel: APIC: Static calls initialized
Aug 13 07:13:41.123839 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:13:41.123856 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Aug 13 07:13:41.123872 kernel: SMBIOS 2.4 present.
Aug 13 07:13:41.123889 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Aug 13 07:13:41.123902 kernel: Hypervisor detected: KVM
Aug 13 07:13:41.123921 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:13:41.123936 kernel: kvm-clock: using sched offset of 12614336278 cycles
Aug 13 07:13:41.123952 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:13:41.123967 kernel: tsc: Detected 2299.998 MHz processor
Aug 13 07:13:41.123981 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:13:41.123998 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:13:41.124014 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Aug 13 07:13:41.124030 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Aug 13 07:13:41.124047 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:13:41.124067 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Aug 13 07:13:41.124083 kernel: Using GB pages for direct mapping
Aug 13 07:13:41.124099 kernel: Secure boot disabled
Aug 13 07:13:41.124115 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:13:41.124131 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Aug 13 07:13:41.124147 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Aug 13 07:13:41.124164 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Aug 13 07:13:41.124187 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Aug 13 07:13:41.124208 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Aug 13 07:13:41.124224 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212)
Aug 13 07:13:41.124242 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Aug 13 07:13:41.124259 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Aug 13 07:13:41.124276 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Aug 13 07:13:41.124293 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Aug 13 07:13:41.124314 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Aug 13 07:13:41.124331 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Aug 13 07:13:41.124348 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Aug 13 07:13:41.124365 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Aug 13 07:13:41.124382 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Aug 13 07:13:41.124399 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Aug 13 07:13:41.124416 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Aug 13 07:13:41.124433 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Aug 13 07:13:41.124450 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Aug 13 07:13:41.124471 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Aug 13 07:13:41.124488 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:13:41.124505 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:13:41.124522 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:13:41.124539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Aug 13 07:13:41.124556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Aug 13 07:13:41.124573 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Aug 13 07:13:41.124591 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Aug 13 07:13:41.124608 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Aug 13 07:13:41.124628 kernel: Zone ranges:
Aug 13 07:13:41.124646 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:13:41.124663 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 07:13:41.124679 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 07:13:41.124697 kernel: Movable zone start for each node
Aug 13 07:13:41.124714 kernel: Early memory node ranges
Aug 13 07:13:41.124730 kernel:   node 0: [mem 0x0000000000001000-0x0000000000054fff]
Aug 13 07:13:41.125017 kernel:   node 0: [mem 0x0000000000060000-0x0000000000097fff]
Aug 13 07:13:41.125033 kernel:   node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Aug 13 07:13:41.125057 kernel:   node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Aug 13 07:13:41.125073 kernel:   node 0: [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 07:13:41.125090 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Aug 13 07:13:41.125107 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:13:41.125124 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Aug 13 07:13:41.125142 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Aug 13 07:13:41.125161 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Aug 13 07:13:41.125180 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Aug 13 07:13:41.125199 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 13 07:13:41.125218 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:13:41.125241 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:13:41.125260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:13:41.125278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:13:41.125298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:13:41.125317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:13:41.125335 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:13:41.125354 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:13:41.125373 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:13:41.125396 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:13:41.125415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:13:41.125433 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:13:41.125451 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:13:41.125474 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:13:41.125491 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:13:41.125544 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:13:41.125590 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:13:41.125610 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:13:41.125634 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:13:41.125652 kernel: random: crng init done
Aug 13 07:13:41.125670 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 07:13:41.125688 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:13:41.125706 kernel: Fallback order for Node 0: 0
Aug 13 07:13:41.125730 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Aug 13 07:13:41.125774 kernel: Policy zone: Normal
Aug 13 07:13:41.125793 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:13:41.125819 kernel: software IO TLB: area num 2.
Aug 13 07:13:41.125842 kernel: Memory: 7513404K/7860584K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 346920K reserved, 0K cma-reserved)
Aug 13 07:13:41.125860 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:13:41.125878 kernel: Kernel/User page tables isolation: enabled
Aug 13 07:13:41.125896 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:13:41.125914 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:13:41.125932 kernel: Dynamic Preempt: voluntary
Aug 13 07:13:41.125950 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:13:41.125970 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:13:41.126005 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:13:41.126025 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:13:41.126044 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:13:41.126067 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:13:41.126086 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:13:41.126105 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:13:41.126124 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 07:13:41.126143 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:13:41.126163 kernel: Console: colour dummy device 80x25
Aug 13 07:13:41.126186 kernel: printk: console [ttyS0] enabled
Aug 13 07:13:41.126205 kernel: ACPI: Core revision 20230628
Aug 13 07:13:41.126224 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:13:41.126243 kernel: x2apic enabled
Aug 13 07:13:41.126262 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:13:41.126282 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Aug 13 07:13:41.126301 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:13:41.126321 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Aug 13 07:13:41.126344 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Aug 13 07:13:41.126363 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Aug 13 07:13:41.126382 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:13:41.126401 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Aug 13 07:13:41.126421 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Aug 13 07:13:41.126440 kernel: Spectre V2 : Mitigation: IBRS
Aug 13 07:13:41.126460 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:13:41.126478 kernel: RETBleed: Mitigation: IBRS
Aug 13 07:13:41.126497 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:13:41.126519 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Aug 13 07:13:41.126539 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:13:41.126559 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 07:13:41.126578 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:13:41.126617 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:13:41.126648 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:13:41.126668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:13:41.126687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:13:41.126706 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:13:41.126729 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 07:13:41.126763 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:13:41.126782 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:13:41.126807 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:13:41.126825 kernel: landlock: Up and running.
Aug 13 07:13:41.126845 kernel: SELinux: Initializing.
Aug 13 07:13:41.126864 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:13:41.126883 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:13:41.126903 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Aug 13 07:13:41.126926 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:13:41.126945 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:13:41.126964 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:13:41.126983 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Aug 13 07:13:41.127003 kernel: signal: max sigframe size: 1776
Aug 13 07:13:41.127022 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:13:41.127041 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:13:41.127060 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:13:41.127079 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:13:41.127101 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:13:41.127120 kernel: .... node #0, CPUs: #1
Aug 13 07:13:41.127140 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 13 07:13:41.127160 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 07:13:41.127179 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:13:41.127197 kernel: smpboot: Max logical packages: 1
Aug 13 07:13:41.127217 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Aug 13 07:13:41.127235 kernel: devtmpfs: initialized
Aug 13 07:13:41.127258 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:13:41.127277 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Aug 13 07:13:41.127297 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:13:41.127316 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:13:41.127335 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:13:41.127354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:13:41.127373 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:13:41.127392 kernel: audit: type=2000 audit(1755069219.870:1): state=initialized audit_enabled=0 res=1
Aug 13 07:13:41.127410 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:13:41.127433 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:13:41.127452 kernel: cpuidle: using governor menu
Aug 13 07:13:41.127471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:13:41.127490 kernel: dca service started, version 1.12.1
Aug 13 07:13:41.127509 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:13:41.127528 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:13:41.127547 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:13:41.127566 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:13:41.127585 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:13:41.127607 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:13:41.127626 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:13:41.127645 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:13:41.127664 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:13:41.127683 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 13 07:13:41.127703 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:13:41.127721 kernel: ACPI: Interpreter enabled
Aug 13 07:13:41.127752 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:13:41.127771 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:13:41.127795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:13:41.127820 kernel: PCI: Ignoring E820 reservations for host bridge windows
Aug 13 07:13:41.127839 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Aug 13 07:13:41.127858 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:13:41.128116 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:13:41.128310 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 07:13:41.128489 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 07:13:41.128517 kernel: PCI host bridge to bus 0000:00
Aug 13 07:13:41.128696 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:13:41.128936 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:13:41.129151 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:13:41.129331 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Aug 13 07:13:41.129500 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:13:41.129707 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:13:41.129965 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Aug 13 07:13:41.130166 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 07:13:41.130347 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 07:13:41.130535 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Aug 13 07:13:41.130715 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 07:13:41.133181 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Aug 13 07:13:41.133396 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:13:41.133605 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 07:13:41.133847 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Aug 13 07:13:41.134051 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:13:41.134243 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Aug 13 07:13:41.134435 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Aug 13 07:13:41.134460 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:13:41.134488 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:13:41.134507 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:13:41.134527 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:13:41.134546 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:13:41.134566 kernel: iommu: Default domain type: Translated
Aug 13 07:13:41.134586 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:13:41.134606 kernel: efivars: Registered efivars operations
Aug 13 07:13:41.134624 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:13:41.134644 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:13:41.134667 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Aug 13 07:13:41.134686 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Aug 13 07:13:41.134704 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Aug 13 07:13:41.134723 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Aug 13 07:13:41.136385 kernel: vgaarb: loaded
Aug 13 07:13:41.136411 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:13:41.136430 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:13:41.136450 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:13:41.136469 kernel: pnp: PnP ACPI init
Aug 13 07:13:41.136488 kernel: pnp: PnP ACPI: found 7 devices
Aug 13 07:13:41.136517 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:13:41.136534 kernel: NET: Registered PF_INET protocol family
Aug 13 07:13:41.136553 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:13:41.136571 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 07:13:41.136589 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:13:41.136607 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:13:41.136626 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 13 07:13:41.136647 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 07:13:41.136667 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:13:41.136690 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:13:41.137775 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:13:41.137813 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:13:41.138022 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:13:41.138209 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:13:41.138390 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:13:41.138570 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Aug 13 07:13:41.138828 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:13:41.138864 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:13:41.138884 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 07:13:41.138905 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Aug 13 07:13:41.138925 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:13:41.138945 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:13:41.138965 kernel: clocksource: Switched to clocksource tsc
Aug 13 07:13:41.138983 kernel: Initialise system trusted keyrings
Aug 13 07:13:41.139002 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 07:13:41.139026 kernel: Key type asymmetric registered
Aug 13 07:13:41.139045 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:13:41.139063 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:13:41.139082 kernel: io scheduler mq-deadline registered
Aug 13 07:13:41.139102 kernel: io scheduler kyber registered
Aug 13 07:13:41.139121 kernel: io scheduler bfq registered
Aug 13 07:13:41.139140 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:13:41.139161 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 07:13:41.139358 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Aug 13 07:13:41.139389 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 13 07:13:41.139580 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Aug 13 07:13:41.139606 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 07:13:41.142867 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Aug 13 07:13:41.142899 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:13:41.142921 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:13:41.142941 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 07:13:41.142961 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Aug 13 07:13:41.142987 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Aug 13 07:13:41.143191 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Aug 13 07:13:41.143219 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:13:41.143239 kernel: i8042: Warning: Keylock active
Aug 13 07:13:41.143259 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:13:41.143278 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:13:41.143467 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 07:13:41.143644 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 07:13:41.143887 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T07:13:40 UTC (1755069220)
Aug 13 07:13:41.144078 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 07:13:41.144104 kernel: intel_pstate: CPU model not supported
Aug 13 07:13:41.144123 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:13:41.144142 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:13:41.144162 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:13:41.144181 kernel: Segment Routing with IPv6
Aug 13 07:13:41.144201 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:13:41.144227 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:13:41.144245 kernel: Key type dns_resolver registered
Aug 13 07:13:41.144262 kernel: IPI shorthand broadcast: enabled
Aug 13 07:13:41.144286 kernel: sched_clock: Marking stable (880004901, 143303224)->(1071755090, -48446965)
Aug 13 07:13:41.144306 kernel: registered taskstats version 1
Aug 13 07:13:41.144324 kernel: Loading compiled-in X.509 certificates
Aug 13 07:13:41.144342 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:13:41.144361 kernel: Key type .fscrypt registered
Aug 13 07:13:41.144379 kernel: Key type fscrypt-provisioning registered
Aug 13 07:13:41.144403 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:13:41.144421 kernel: ima: No architecture policies found
Aug 13 07:13:41.144440 kernel: clk: Disabling unused clocks
Aug 13 07:13:41.144458 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:13:41.144476 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:13:41.144495 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:13:41.144513 kernel: Run /init as init process
Aug 13 07:13:41.144532 kernel:   with arguments:
Aug 13 07:13:41.144552 kernel:     /init
Aug 13 07:13:41.144576 kernel:   with environment:
Aug 13 07:13:41.144595 kernel:     HOME=/
Aug 13 07:13:41.144615 kernel:     TERM=linux
Aug 13 07:13:41.144635 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:13:41.144655 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:13:41.144678 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:13:41.144703 systemd[1]: Detected virtualization google.
Aug 13 07:13:41.144728 systemd[1]: Detected architecture x86-64. Aug 13 07:13:41.146783 systemd[1]: Running in initrd. Aug 13 07:13:41.146811 systemd[1]: No hostname configured, using default hostname. Aug 13 07:13:41.146841 systemd[1]: Hostname set to . Aug 13 07:13:41.146858 systemd[1]: Initializing machine ID from random generator. Aug 13 07:13:41.146875 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:13:41.146895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:13:41.146915 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:13:41.146941 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:13:41.146959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:13:41.146975 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:13:41.146992 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:13:41.147012 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:13:41.147032 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:13:41.147052 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:13:41.147078 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:13:41.147099 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:13:41.147139 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:13:41.147163 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:13:41.147183 systemd[1]: Reached target timers.target - Timer Units. 
Aug 13 07:13:41.147203 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:13:41.147223 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:13:41.147247 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:13:41.147268 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:13:41.147289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:13:41.147310 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:13:41.147331 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:13:41.147351 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:13:41.147371 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:13:41.147391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:13:41.147415 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:13:41.147436 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:13:41.147455 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:13:41.147476 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:13:41.147497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:13:41.147518 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:13:41.147538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:13:41.147593 systemd-journald[183]: Collecting audit messages is disabled. Aug 13 07:13:41.147642 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:13:41.147664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Aug 13 07:13:41.147688 systemd-journald[183]: Journal started
Aug 13 07:13:41.147728 systemd-journald[183]: Runtime Journal (/run/log/journal/fe724fcf456144b391bf03268609e878) is 8.0M, max 148.7M, 140.7M free.
Aug 13 07:13:41.129176 systemd-modules-load[184]: Inserted module 'overlay'
Aug 13 07:13:41.167747 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:13:41.169595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:13:41.178755 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:13:41.185163 systemd-modules-load[184]: Inserted module 'br_netfilter'
Aug 13 07:13:41.188918 kernel: Bridge firewalling registered
Aug 13 07:13:41.187978 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:13:41.190656 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:13:41.196715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:13:41.197804 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:13:41.208972 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:13:41.216974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:13:41.217942 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:13:41.232612 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:13:41.244041 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:13:41.247106 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:13:41.256044 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:13:41.266994 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:13:41.295382 dracut-cmdline[218]: dracut-dracut-053
Aug 13 07:13:41.298684 systemd-resolved[214]: Positive Trust Anchors:
Aug 13 07:13:41.298720 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:13:41.307726 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:13:41.298806 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:13:41.303649 systemd-resolved[214]: Defaulting to hostname 'linux'.
Aug 13 07:13:41.305504 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:13:41.312006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:13:41.402782 kernel: SCSI subsystem initialized
Aug 13 07:13:41.414783 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:13:41.426795 kernel: iscsi: registered transport (tcp)
Aug 13 07:13:41.451790 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:13:41.451879 kernel: QLogic iSCSI HBA Driver
Aug 13 07:13:41.504416 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:13:41.514980 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:13:41.544842 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:13:41.544930 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:13:41.544955 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:13:41.589777 kernel: raid6: avx2x4 gen() 18106 MB/s
Aug 13 07:13:41.606773 kernel: raid6: avx2x2 gen() 18051 MB/s
Aug 13 07:13:41.624142 kernel: raid6: avx2x1 gen() 14008 MB/s
Aug 13 07:13:41.624183 kernel: raid6: using algorithm avx2x4 gen() 18106 MB/s
Aug 13 07:13:41.642154 kernel: raid6: .... xor() 6739 MB/s, rmw enabled
Aug 13 07:13:41.642213 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:13:41.665785 kernel: xor: automatically using best checksumming function   avx
Aug 13 07:13:41.839776 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:13:41.853167 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:13:41.859008 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:13:41.893823 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Aug 13 07:13:41.901375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:13:41.914181 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:13:41.944240 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Aug 13 07:13:41.981899 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:13:41.988050 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:13:42.081482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:13:42.090983 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:13:42.133274 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:13:42.145408 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:13:42.150276 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:13:42.161897 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:13:42.171332 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:13:42.204344 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:13:42.230017 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:13:42.252784 kernel: scsi host0: Virtio SCSI HBA
Aug 13 07:13:42.279773 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Aug 13 07:13:42.279876 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:13:42.304761 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:13:42.320314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:13:42.320524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:13:42.334957 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:13:42.342822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:13:42.343064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:13:42.345927 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:13:42.357543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:13:42.370439 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Aug 13 07:13:42.370797 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Aug 13 07:13:42.373922 kernel: sd 0:0:1:0: [sda] Write Protect is off
Aug 13 07:13:42.374244 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Aug 13 07:13:42.374494 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 07:13:42.387459 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:13:42.387535 kernel: GPT:17805311 != 25165823
Aug 13 07:13:42.387561 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:13:42.387586 kernel: GPT:17805311 != 25165823
Aug 13 07:13:42.387612 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:13:42.387636 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:42.387660 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Aug 13 07:13:42.390144 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:13:42.404031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:13:42.442010 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (472)
Aug 13 07:13:42.446129 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (444)
Aug 13 07:13:42.457003 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Aug 13 07:13:42.461217 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:13:42.496652 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Aug 13 07:13:42.503266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Aug 13 07:13:42.532408 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Aug 13 07:13:42.548884 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Aug 13 07:13:42.575989 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:13:42.622421 disk-uuid[549]: Primary Header is updated.
Aug 13 07:13:42.622421 disk-uuid[549]: Secondary Entries is updated.
Aug 13 07:13:42.622421 disk-uuid[549]: Secondary Header is updated.
Aug 13 07:13:42.648909 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:42.669779 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:42.692760 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:43.689234 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:43.689313 disk-uuid[550]: The operation has completed successfully.
Aug 13 07:13:43.768328 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:13:43.768481 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:13:43.793955 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:13:43.827392 sh[567]: Success
Aug 13 07:13:43.850030 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:13:43.939670 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:13:43.965885 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:13:43.970093 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:13:44.036474 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:13:44.036567 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:13:44.036594 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:13:44.052752 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:13:44.052837 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:13:44.087770 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 07:13:44.094316 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:13:44.095329 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:13:44.104947 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:13:44.139850 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:13:44.173826 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:44.173912 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:13:44.173941 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:13:44.197786 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:13:44.197864 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:13:44.210156 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:13:44.227943 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:44.236136 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:13:44.253978 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:13:44.352610 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:13:44.364127 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:13:44.453566 systemd-networkd[750]: lo: Link UP
Aug 13 07:13:44.453989 systemd-networkd[750]: lo: Gained carrier
Aug 13 07:13:44.456667 systemd-networkd[750]: Enumeration completed
Aug 13 07:13:44.456849 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:13:44.457573 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:13:44.478700 ignition[656]: Ignition 2.19.0
Aug 13 07:13:44.457581 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:13:44.478711 ignition[656]: Stage: fetch-offline
Aug 13 07:13:44.460134 systemd-networkd[750]: eth0: Link UP
Aug 13 07:13:44.478797 ignition[656]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:44.460141 systemd-networkd[750]: eth0: Gained carrier
Aug 13 07:13:44.478822 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:44.460154 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:13:44.479009 ignition[656]: parsed url from cmdline: ""
Aug 13 07:13:44.471835 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.36/32, gateway 10.128.0.1 acquired from 169.254.169.254
Aug 13 07:13:44.479021 ignition[656]: no config URL provided
Aug 13 07:13:44.494273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:13:44.479032 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:13:44.523638 systemd[1]: Reached target network.target - Network.
Aug 13 07:13:44.479052 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:13:44.544031 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 07:13:44.479070 ignition[656]: failed to fetch config: resource requires networking
Aug 13 07:13:44.580209 unknown[758]: fetched base config from "system"
Aug 13 07:13:44.479397 ignition[656]: Ignition finished successfully
Aug 13 07:13:44.580222 unknown[758]: fetched base config from "system"
Aug 13 07:13:44.569008 ignition[758]: Ignition 2.19.0
Aug 13 07:13:44.580233 unknown[758]: fetched user config from "gcp"
Aug 13 07:13:44.569018 ignition[758]: Stage: fetch
Aug 13 07:13:44.601353 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 07:13:44.569216 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:44.635996 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:13:44.569229 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:44.684822 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:13:44.569355 ignition[758]: parsed url from cmdline: ""
Aug 13 07:13:44.707969 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:13:44.569363 ignition[758]: no config URL provided
Aug 13 07:13:44.757832 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:13:44.569371 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:13:44.772262 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:13:44.569389 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:13:44.788907 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:13:44.569412 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Aug 13 07:13:44.806924 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:13:44.573177 ignition[758]: GET result: OK
Aug 13 07:13:44.821912 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:13:44.573281 ignition[758]: parsing config with SHA512: 2e4bf7c169be4239068185235135e31af24bbcf487da30b70aaa254bcaaa547e0b6164b29d4c4dcff03db82ee49174ce4c0ec566431a0f8813b28649a18e7c46
Aug 13 07:13:44.838922 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:13:44.581542 ignition[758]: fetch: fetch complete
Aug 13 07:13:44.862989 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:13:44.581553 ignition[758]: fetch: fetch passed
Aug 13 07:13:44.581633 ignition[758]: Ignition finished successfully
Aug 13 07:13:44.682257 ignition[764]: Ignition 2.19.0
Aug 13 07:13:44.682267 ignition[764]: Stage: kargs
Aug 13 07:13:44.682461 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:44.682476 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:44.683503 ignition[764]: kargs: kargs passed
Aug 13 07:13:44.683563 ignition[764]: Ignition finished successfully
Aug 13 07:13:44.754538 ignition[771]: Ignition 2.19.0
Aug 13 07:13:44.754546 ignition[771]: Stage: disks
Aug 13 07:13:44.754790 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:44.754809 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:44.756074 ignition[771]: disks: disks passed
Aug 13 07:13:44.756131 ignition[771]: Ignition finished successfully
Aug 13 07:13:44.908048 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Aug 13 07:13:45.060492 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:13:45.065057 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:13:45.210780 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:13:45.211256 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:13:45.212158 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:13:45.234994 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:13:45.264881 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:13:45.308702 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (787)
Aug 13 07:13:45.308772 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:45.308802 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:13:45.308845 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:13:45.265681 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:13:45.348927 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:13:45.348976 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:13:45.265777 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:13:45.265817 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:13:45.341651 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:13:45.368816 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:13:45.397989 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:13:45.541606 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:13:45.552786 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:13:45.562880 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:13:45.572889 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:13:45.711513 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:13:45.739919 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:13:45.767927 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:45.762959 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:13:45.786174 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:13:45.815057 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:13:45.820125 ignition[899]: INFO : Ignition 2.19.0
Aug 13 07:13:45.820125 ignition[899]: INFO : Stage: mount
Aug 13 07:13:45.852900 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:45.852900 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:45.852900 ignition[899]: INFO : mount: mount passed
Aug 13 07:13:45.852900 ignition[899]: INFO : Ignition finished successfully
Aug 13 07:13:45.831420 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:13:45.844904 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:13:45.845220 systemd-networkd[750]: eth0: Gained IPv6LL
Aug 13 07:13:46.217001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:13:46.263767 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (911)
Aug 13 07:13:46.281536 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:46.281629 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:13:46.281656 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:13:46.304101 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:13:46.304188 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:13:46.307541 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:13:46.346706 ignition[928]: INFO : Ignition 2.19.0
Aug 13 07:13:46.346706 ignition[928]: INFO : Stage: files
Aug 13 07:13:46.360932 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:46.360932 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:46.360932 ignition[928]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:13:46.360932 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:13:46.360932 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:13:46.360932 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:13:46.360932 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:13:46.360932 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:13:46.356979 unknown[928]: wrote ssh authorized keys file for user: core
Aug 13 07:13:46.462925 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:13:46.462925 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:13:46.496909 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:13:46.855694 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:13:46.855694 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:13:47.258992 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:13:47.652381 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:13:47.652381 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: files passed
Aug 13 07:13:47.691934 ignition[928]: INFO : Ignition finished successfully
Aug 13 07:13:47.658327 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:13:47.687955 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:13:47.713973 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:13:47.768256 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:13:47.905930 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:13:47.905930 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:13:47.768396 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:13:47.955061 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:13:47.792280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:13:47.818227 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:13:47.848990 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:13:47.933519 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:13:47.933645 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:13:47.945715 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:13:47.964951 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:13:47.986065 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:13:47.992950 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:13:48.057642 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:13:48.073002 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:13:48.118078 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:13:48.131138 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:13:48.153177 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:13:48.171170 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:13:48.171391 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:13:48.200181 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:13:48.219106 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:13:48.238183 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:13:48.258167 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:13:48.277093 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:13:48.298212 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:13:48.318131 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:13:48.339206 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:13:48.359227 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:13:48.380160 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:13:48.398113 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:13:48.398316 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:13:48.426212 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:13:48.445162 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:13:48.466017 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:13:48.466188 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:13:48.487149 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:13:48.487370 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:13:48.516175 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:13:48.516413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:13:48.540210 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:13:48.604932 ignition[981]: INFO : Ignition 2.19.0 Aug 13 07:13:48.604932 ignition[981]: INFO : Stage: umount Aug 13 07:13:48.604932 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:13:48.604932 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 07:13:48.604932 ignition[981]: INFO : umount: umount passed Aug 13 07:13:48.604932 ignition[981]: INFO : Ignition finished successfully Aug 13 07:13:48.540415 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:13:48.565058 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:13:48.601161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:13:48.613052 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:13:48.613294 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:13:48.662207 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:13:48.662504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:13:48.693708 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Aug 13 07:13:48.694825 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:13:48.694948 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:13:48.710636 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:13:48.710777 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:13:48.720261 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:13:48.720394 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:13:48.748219 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:13:48.748285 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:13:48.756188 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:13:48.756253 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:13:48.773149 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 07:13:48.773217 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 07:13:48.805106 systemd[1]: Stopped target network.target - Network. Aug 13 07:13:48.813101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:13:48.813188 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:13:48.846137 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:13:48.861897 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:13:48.867846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:13:48.881017 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:13:48.892125 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:13:48.922118 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:13:48.922194 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Aug 13 07:13:48.930202 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:13:48.930270 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:13:48.947189 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:13:48.947268 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:13:48.964205 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:13:48.964287 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:13:48.981187 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:13:48.981271 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:13:48.998458 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:13:49.002849 systemd-networkd[750]: eth0: DHCPv6 lease lost Aug 13 07:13:49.026162 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:13:49.046412 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:13:49.046551 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:13:49.065447 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:13:49.065842 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:13:49.076075 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:13:49.076135 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:13:49.101928 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:13:49.128855 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:13:49.128975 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:13:49.147993 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:13:49.148096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 07:13:49.165997 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:13:49.166091 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:13:49.183996 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:13:49.184114 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:13:49.205144 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:13:49.213589 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:13:49.213787 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:13:49.236612 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:13:49.236731 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:13:49.267973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:13:49.653906 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Aug 13 07:13:49.268061 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:13:49.287952 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:13:49.288059 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:13:49.317900 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:13:49.318055 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:13:49.343147 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:13:49.343241 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:13:49.378988 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:13:49.381083 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Aug 13 07:13:49.381162 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:13:49.428133 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 07:13:49.428221 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:13:49.449086 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:13:49.449165 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:13:49.480134 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:13:49.480231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:13:49.499636 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:13:49.499812 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:13:49.519346 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:13:49.519468 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:13:49.541255 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:13:49.567963 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:13:49.604038 systemd[1]: Switching root. 
Aug 13 07:13:49.882884 systemd-journald[183]: Journal stopped Aug 13 07:13:41.123468 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:13:41.123512 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:13:41.123531 kernel: BIOS-provided physical RAM map: Aug 13 07:13:41.123546 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Aug 13 07:13:41.123559 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Aug 13 07:13:41.123573 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Aug 13 07:13:41.123591 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Aug 13 07:13:41.123609 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Aug 13 07:13:41.123624 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Aug 13 07:13:41.123639 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Aug 13 07:13:41.123654 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Aug 13 07:13:41.123669 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Aug 13 07:13:41.123684 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Aug 13 07:13:41.123700 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Aug 13 07:13:41.123722 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Aug 13 07:13:41.123763 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Aug 13 
07:13:41.123780 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Aug 13 07:13:41.123797 kernel: NX (Execute Disable) protection: active Aug 13 07:13:41.123822 kernel: APIC: Static calls initialized Aug 13 07:13:41.123839 kernel: efi: EFI v2.7 by EDK II Aug 13 07:13:41.123856 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Aug 13 07:13:41.123872 kernel: SMBIOS 2.4 present. Aug 13 07:13:41.123889 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Aug 13 07:13:41.123902 kernel: Hypervisor detected: KVM Aug 13 07:13:41.123921 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:13:41.123936 kernel: kvm-clock: using sched offset of 12614336278 cycles Aug 13 07:13:41.123952 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:13:41.123967 kernel: tsc: Detected 2299.998 MHz processor Aug 13 07:13:41.123981 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:13:41.123998 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:13:41.124014 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Aug 13 07:13:41.124030 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Aug 13 07:13:41.124047 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:13:41.124067 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Aug 13 07:13:41.124083 kernel: Using GB pages for direct mapping Aug 13 07:13:41.124099 kernel: Secure boot disabled Aug 13 07:13:41.124115 kernel: ACPI: Early table checksum verification disabled Aug 13 07:13:41.124131 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Aug 13 07:13:41.124147 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Aug 13 07:13:41.124164 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Aug 13 
07:13:41.124187 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Aug 13 07:13:41.124208 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Aug 13 07:13:41.124224 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Aug 13 07:13:41.124242 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Aug 13 07:13:41.124259 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Aug 13 07:13:41.124276 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Aug 13 07:13:41.124293 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Aug 13 07:13:41.124314 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Aug 13 07:13:41.124331 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Aug 13 07:13:41.124348 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Aug 13 07:13:41.124365 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Aug 13 07:13:41.124382 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Aug 13 07:13:41.124399 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Aug 13 07:13:41.124416 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Aug 13 07:13:41.124433 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Aug 13 07:13:41.124450 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Aug 13 07:13:41.124471 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Aug 13 07:13:41.124488 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 07:13:41.124505 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 07:13:41.124522 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 07:13:41.124539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Aug 13 07:13:41.124556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Aug 13 07:13:41.124573 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Aug 13 07:13:41.124591 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Aug 13 07:13:41.124608 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Aug 13 07:13:41.124628 kernel: Zone ranges: Aug 13 07:13:41.124646 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:13:41.124663 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 07:13:41.124679 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Aug 13 07:13:41.124697 kernel: Movable zone start for each node Aug 13 07:13:41.124714 kernel: Early memory node ranges Aug 13 07:13:41.124730 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Aug 13 07:13:41.125017 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Aug 13 07:13:41.125033 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Aug 13 07:13:41.125057 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Aug 13 07:13:41.125073 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Aug 13 07:13:41.125090 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Aug 13 07:13:41.125107 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:13:41.125124 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Aug 13 07:13:41.125142 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Aug 13 07:13:41.125161 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 13 07:13:41.125180 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Aug 13 07:13:41.125199 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 13 07:13:41.125218 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:13:41.125241 kernel: IOAPIC[0]: apic_id 0, 
version 17, address 0xfec00000, GSI 0-23 Aug 13 07:13:41.125260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:13:41.125278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:13:41.125298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:13:41.125317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:13:41.125335 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:13:41.125354 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 07:13:41.125373 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Aug 13 07:13:41.125396 kernel: Booting paravirtualized kernel on KVM Aug 13 07:13:41.125415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:13:41.125433 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 07:13:41.125451 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Aug 13 07:13:41.125474 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 13 07:13:41.125491 kernel: pcpu-alloc: [0] 0 1 Aug 13 07:13:41.125544 kernel: kvm-guest: PV spinlocks enabled Aug 13 07:13:41.125590 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 07:13:41.125610 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:13:41.125634 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 07:13:41.125652 kernel: random: crng init done Aug 13 07:13:41.125670 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 13 07:13:41.125688 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 07:13:41.125706 kernel: Fallback order for Node 0: 0 Aug 13 07:13:41.125730 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Aug 13 07:13:41.125774 kernel: Policy zone: Normal Aug 13 07:13:41.125793 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:13:41.125819 kernel: software IO TLB: area num 2. Aug 13 07:13:41.125842 kernel: Memory: 7513404K/7860584K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 346920K reserved, 0K cma-reserved) Aug 13 07:13:41.125860 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 07:13:41.125878 kernel: Kernel/User page tables isolation: enabled Aug 13 07:13:41.125896 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:13:41.125914 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:13:41.125932 kernel: Dynamic Preempt: voluntary Aug 13 07:13:41.125950 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:13:41.125970 kernel: rcu: RCU event tracing is enabled. Aug 13 07:13:41.126005 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 07:13:41.126025 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:13:41.126044 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:13:41.126067 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:13:41.126086 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:13:41.126105 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 07:13:41.126124 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 07:13:41.126143 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 13 07:13:41.126163 kernel: Console: colour dummy device 80x25 Aug 13 07:13:41.126186 kernel: printk: console [ttyS0] enabled Aug 13 07:13:41.126205 kernel: ACPI: Core revision 20230628 Aug 13 07:13:41.126224 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:13:41.126243 kernel: x2apic enabled Aug 13 07:13:41.126262 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:13:41.126282 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Aug 13 07:13:41.126301 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 13 07:13:41.126321 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Aug 13 07:13:41.126344 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Aug 13 07:13:41.126363 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Aug 13 07:13:41.126382 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:13:41.126401 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 13 07:13:41.126421 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 13 07:13:41.126440 kernel: Spectre V2 : Mitigation: IBRS Aug 13 07:13:41.126460 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:13:41.126478 kernel: RETBleed: Mitigation: IBRS Aug 13 07:13:41.126497 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:13:41.126519 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Aug 13 07:13:41.126539 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:13:41.126559 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 07:13:41.126578 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:13:41.126617 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 
07:13:41.126648 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:13:41.126668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:13:41.126687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:13:41.126706 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:13:41.126729 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 07:13:41.126763 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:13:41.126782 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:13:41.126807 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:13:41.126825 kernel: landlock: Up and running. Aug 13 07:13:41.126845 kernel: SELinux: Initializing. Aug 13 07:13:41.126864 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:13:41.126883 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:13:41.126903 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Aug 13 07:13:41.126926 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:13:41.126945 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:13:41.126964 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:13:41.126983 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Aug 13 07:13:41.127003 kernel: signal: max sigframe size: 1776 Aug 13 07:13:41.127022 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:13:41.127041 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:13:41.127060 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:13:41.127079 kernel: smp: Bringing up secondary CPUs ... 
Aug 13 07:13:41.127101 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:13:41.127120 kernel: .... node #0, CPUs: #1 Aug 13 07:13:41.127140 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Aug 13 07:13:41.127160 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 13 07:13:41.127179 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:13:41.127197 kernel: smpboot: Max logical packages: 1 Aug 13 07:13:41.127217 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Aug 13 07:13:41.127235 kernel: devtmpfs: initialized Aug 13 07:13:41.127258 kernel: x86/mm: Memory block size: 128MB Aug 13 07:13:41.127277 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Aug 13 07:13:41.127297 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:13:41.127316 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 07:13:41.127335 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:13:41.127354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:13:41.127373 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:13:41.127392 kernel: audit: type=2000 audit(1755069219.870:1): state=initialized audit_enabled=0 res=1 Aug 13 07:13:41.127410 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:13:41.127433 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:13:41.127452 kernel: cpuidle: using governor menu Aug 13 07:13:41.127471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:13:41.127490 kernel: dca service started, version 1.12.1 Aug 13 07:13:41.127509 kernel: PCI: 
Using configuration type 1 for base access Aug 13 07:13:41.127528 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 07:13:41.127547 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:13:41.127566 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:13:41.127585 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:13:41.127607 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:13:41.127626 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:13:41.127645 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:13:41.127664 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:13:41.127683 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Aug 13 07:13:41.127703 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:13:41.127721 kernel: ACPI: Interpreter enabled Aug 13 07:13:41.127752 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 07:13:41.127771 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:13:41.127795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:13:41.127820 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 13 07:13:41.127839 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Aug 13 07:13:41.127858 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:13:41.128116 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:13:41.128310 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 07:13:41.128489 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 07:13:41.128517 kernel: PCI host bridge to bus 0000:00 Aug 13 07:13:41.128696 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 
07:13:41.128936 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:13:41.129151 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:13:41.129331 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Aug 13 07:13:41.129500 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:13:41.129707 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:13:41.129965 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Aug 13 07:13:41.130166 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 07:13:41.130347 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 07:13:41.130535 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Aug 13 07:13:41.130715 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 07:13:41.133181 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Aug 13 07:13:41.133396 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:13:41.133605 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 07:13:41.133847 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Aug 13 07:13:41.134051 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:13:41.134243 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Aug 13 07:13:41.134435 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Aug 13 07:13:41.134460 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:13:41.134488 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:13:41.134507 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:13:41.134527 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:13:41.134546 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:13:41.134566 kernel: iommu: Default domain type: Translated
Aug 13 07:13:41.134586 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:13:41.134606 kernel: efivars: Registered efivars operations
Aug 13 07:13:41.134624 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:13:41.134644 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:13:41.134667 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Aug 13 07:13:41.134686 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Aug 13 07:13:41.134704 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Aug 13 07:13:41.134723 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Aug 13 07:13:41.136385 kernel: vgaarb: loaded
Aug 13 07:13:41.136411 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:13:41.136430 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:13:41.136450 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:13:41.136469 kernel: pnp: PnP ACPI init
Aug 13 07:13:41.136488 kernel: pnp: PnP ACPI: found 7 devices
Aug 13 07:13:41.136517 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:13:41.136534 kernel: NET: Registered PF_INET protocol family
Aug 13 07:13:41.136553 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:13:41.136571 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 07:13:41.136589 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:13:41.136607 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:13:41.136626 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 13 07:13:41.136647 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 07:13:41.136667 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:13:41.136690 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:13:41.137775 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:13:41.137813 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:13:41.138022 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:13:41.138209 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:13:41.138390 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:13:41.138570 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Aug 13 07:13:41.138828 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:13:41.138864 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:13:41.138884 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 07:13:41.138905 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Aug 13 07:13:41.138925 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:13:41.138945 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:13:41.138965 kernel: clocksource: Switched to clocksource tsc
Aug 13 07:13:41.138983 kernel: Initialise system trusted keyrings
Aug 13 07:13:41.139002 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 07:13:41.139026 kernel: Key type asymmetric registered
Aug 13 07:13:41.139045 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:13:41.139063 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:13:41.139082 kernel: io scheduler mq-deadline registered
Aug 13 07:13:41.139102 kernel: io scheduler kyber registered
Aug 13 07:13:41.139121 kernel: io scheduler bfq registered
Aug 13 07:13:41.139140 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:13:41.139161 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 07:13:41.139358 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Aug 13 07:13:41.139389 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 13 07:13:41.139580 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Aug 13 07:13:41.139606 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 07:13:41.142867 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Aug 13 07:13:41.142899 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:13:41.142921 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:13:41.142941 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 07:13:41.142961 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Aug 13 07:13:41.142987 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Aug 13 07:13:41.143191 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Aug 13 07:13:41.143219 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:13:41.143239 kernel: i8042: Warning: Keylock active
Aug 13 07:13:41.143259 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:13:41.143278 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:13:41.143467 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 07:13:41.143644 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 07:13:41.143887 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T07:13:40 UTC (1755069220)
Aug 13 07:13:41.144078 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 07:13:41.144104 kernel: intel_pstate: CPU model not supported
Aug 13 07:13:41.144123 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:13:41.144142 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:13:41.144162 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:13:41.144181 kernel: Segment Routing with IPv6
Aug 13 07:13:41.144201 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:13:41.144227 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:13:41.144245 kernel: Key type dns_resolver registered
Aug 13 07:13:41.144262 kernel: IPI shorthand broadcast: enabled
Aug 13 07:13:41.144286 kernel: sched_clock: Marking stable (880004901, 143303224)->(1071755090, -48446965)
Aug 13 07:13:41.144306 kernel: registered taskstats version 1
Aug 13 07:13:41.144324 kernel: Loading compiled-in X.509 certificates
Aug 13 07:13:41.144342 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:13:41.144361 kernel: Key type .fscrypt registered
Aug 13 07:13:41.144379 kernel: Key type fscrypt-provisioning registered
Aug 13 07:13:41.144403 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:13:41.144421 kernel: ima: No architecture policies found
Aug 13 07:13:41.144440 kernel: clk: Disabling unused clocks
Aug 13 07:13:41.144458 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:13:41.144476 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:13:41.144495 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:13:41.144513 kernel: Run /init as init process
Aug 13 07:13:41.144532 kernel: with arguments:
Aug 13 07:13:41.144552 kernel: /init
Aug 13 07:13:41.144576 kernel: with environment:
Aug 13 07:13:41.144595 kernel: HOME=/
Aug 13 07:13:41.144615 kernel: TERM=linux
Aug 13 07:13:41.144635 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:13:41.144655 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:13:41.144678 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:13:41.144703 systemd[1]: Detected virtualization google.
Aug 13 07:13:41.144728 systemd[1]: Detected architecture x86-64.
Aug 13 07:13:41.146783 systemd[1]: Running in initrd.
Aug 13 07:13:41.146811 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:13:41.146841 systemd[1]: Hostname set to .
Aug 13 07:13:41.146858 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:13:41.146875 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:13:41.146895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:13:41.146915 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:13:41.146941 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:13:41.146959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:13:41.146975 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:13:41.146992 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:13:41.147012 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:13:41.147032 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:13:41.147052 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:13:41.147078 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:13:41.147099 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:13:41.147139 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:13:41.147163 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:13:41.147183 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:13:41.147203 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:13:41.147223 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:13:41.147247 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:13:41.147268 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:13:41.147289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:13:41.147310 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:13:41.147331 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:13:41.147351 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:13:41.147371 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:13:41.147391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:13:41.147415 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:13:41.147436 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:13:41.147455 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:13:41.147476 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:13:41.147497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:13:41.147518 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:13:41.147538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:13:41.147593 systemd-journald[183]: Collecting audit messages is disabled.
Aug 13 07:13:41.147642 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:13:41.147664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:13:41.147688 systemd-journald[183]: Journal started
Aug 13 07:13:41.147728 systemd-journald[183]: Runtime Journal (/run/log/journal/fe724fcf456144b391bf03268609e878) is 8.0M, max 148.7M, 140.7M free.
Aug 13 07:13:41.129176 systemd-modules-load[184]: Inserted module 'overlay'
Aug 13 07:13:41.167747 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:13:41.169595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:13:41.178755 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:13:41.185163 systemd-modules-load[184]: Inserted module 'br_netfilter'
Aug 13 07:13:41.188918 kernel: Bridge firewalling registered
Aug 13 07:13:41.187978 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:13:41.190656 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:13:41.196715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:13:41.197804 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:13:41.208972 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:13:41.216974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:13:41.217942 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:13:41.232612 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:13:41.244041 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:13:41.247106 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:13:41.256044 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:13:41.266994 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:13:41.295382 dracut-cmdline[218]: dracut-dracut-053
Aug 13 07:13:41.298684 systemd-resolved[214]: Positive Trust Anchors:
Aug 13 07:13:41.298720 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:13:41.307726 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:13:41.298806 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:13:41.303649 systemd-resolved[214]: Defaulting to hostname 'linux'.
Aug 13 07:13:41.305504 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:13:41.312006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:13:41.402782 kernel: SCSI subsystem initialized
Aug 13 07:13:41.414783 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:13:41.426795 kernel: iscsi: registered transport (tcp)
Aug 13 07:13:41.451790 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:13:41.451879 kernel: QLogic iSCSI HBA Driver
Aug 13 07:13:41.504416 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:13:41.514980 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:13:41.544842 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:13:41.544930 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:13:41.544955 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:13:41.589777 kernel: raid6: avx2x4 gen() 18106 MB/s
Aug 13 07:13:41.606773 kernel: raid6: avx2x2 gen() 18051 MB/s
Aug 13 07:13:41.624142 kernel: raid6: avx2x1 gen() 14008 MB/s
Aug 13 07:13:41.624183 kernel: raid6: using algorithm avx2x4 gen() 18106 MB/s
Aug 13 07:13:41.642154 kernel: raid6: .... xor() 6739 MB/s, rmw enabled
Aug 13 07:13:41.642213 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:13:41.665785 kernel: xor: automatically using best checksumming function avx
Aug 13 07:13:41.839776 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:13:41.853167 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:13:41.859008 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:13:41.893823 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Aug 13 07:13:41.901375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:13:41.914181 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:13:41.944240 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Aug 13 07:13:41.981899 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:13:41.988050 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:13:42.081482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:13:42.090983 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:13:42.133274 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:13:42.145408 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:13:42.150276 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:13:42.161897 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:13:42.171332 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:13:42.204344 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:13:42.230017 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:13:42.252784 kernel: scsi host0: Virtio SCSI HBA
Aug 13 07:13:42.279773 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Aug 13 07:13:42.279876 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:13:42.304761 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:13:42.320314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:13:42.320524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:13:42.334957 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:13:42.342822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:13:42.343064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:13:42.345927 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:13:42.357543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:13:42.370439 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Aug 13 07:13:42.370797 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Aug 13 07:13:42.373922 kernel: sd 0:0:1:0: [sda] Write Protect is off
Aug 13 07:13:42.374244 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Aug 13 07:13:42.374494 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 07:13:42.387459 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:13:42.387535 kernel: GPT:17805311 != 25165823
Aug 13 07:13:42.387561 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:13:42.387586 kernel: GPT:17805311 != 25165823
Aug 13 07:13:42.387612 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:13:42.387636 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:42.387660 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Aug 13 07:13:42.390144 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:13:42.404031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:13:42.442010 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (472)
Aug 13 07:13:42.446129 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (444)
Aug 13 07:13:42.457003 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Aug 13 07:13:42.461217 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:13:42.496652 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Aug 13 07:13:42.503266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Aug 13 07:13:42.532408 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Aug 13 07:13:42.548884 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Aug 13 07:13:42.575989 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:13:42.622421 disk-uuid[549]: Primary Header is updated.
Aug 13 07:13:42.622421 disk-uuid[549]: Secondary Entries is updated.
Aug 13 07:13:42.622421 disk-uuid[549]: Secondary Header is updated.
Aug 13 07:13:42.648909 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:42.669779 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:42.692760 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:43.689234 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:13:43.689313 disk-uuid[550]: The operation has completed successfully.
Aug 13 07:13:43.768328 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:13:43.768481 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:13:43.793955 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:13:43.827392 sh[567]: Success
Aug 13 07:13:43.850030 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:13:43.939670 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:13:43.965885 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:13:43.970093 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:13:44.036474 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:13:44.036567 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:13:44.036594 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:13:44.052752 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:13:44.052837 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:13:44.087770 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 07:13:44.094316 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:13:44.095329 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:13:44.104947 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:13:44.139850 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:13:44.173826 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:44.173912 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:13:44.173941 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:13:44.197786 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:13:44.197864 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:13:44.210156 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:13:44.227943 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:44.236136 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:13:44.253978 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:13:44.352610 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:13:44.364127 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:13:44.453566 systemd-networkd[750]: lo: Link UP
Aug 13 07:13:44.453989 systemd-networkd[750]: lo: Gained carrier
Aug 13 07:13:44.456667 systemd-networkd[750]: Enumeration completed
Aug 13 07:13:44.456849 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:13:44.457573 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:13:44.478700 ignition[656]: Ignition 2.19.0
Aug 13 07:13:44.457581 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:13:44.478711 ignition[656]: Stage: fetch-offline
Aug 13 07:13:44.460134 systemd-networkd[750]: eth0: Link UP
Aug 13 07:13:44.478797 ignition[656]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:44.460141 systemd-networkd[750]: eth0: Gained carrier
Aug 13 07:13:44.478822 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:44.460154 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:13:44.479009 ignition[656]: parsed url from cmdline: ""
Aug 13 07:13:44.471835 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.36/32, gateway 10.128.0.1 acquired from 169.254.169.254
Aug 13 07:13:44.479021 ignition[656]: no config URL provided
Aug 13 07:13:44.494273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:13:44.479032 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:13:44.523638 systemd[1]: Reached target network.target - Network.
Aug 13 07:13:44.479052 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:13:44.544031 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 07:13:44.479070 ignition[656]: failed to fetch config: resource requires networking
Aug 13 07:13:44.580209 unknown[758]: fetched base config from "system"
Aug 13 07:13:44.479397 ignition[656]: Ignition finished successfully
Aug 13 07:13:44.580222 unknown[758]: fetched base config from "system"
Aug 13 07:13:44.569008 ignition[758]: Ignition 2.19.0
Aug 13 07:13:44.580233 unknown[758]: fetched user config from "gcp"
Aug 13 07:13:44.569018 ignition[758]: Stage: fetch
Aug 13 07:13:44.601353 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 07:13:44.569216 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:44.635996 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:13:44.569229 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:44.684822 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:13:44.569355 ignition[758]: parsed url from cmdline: ""
Aug 13 07:13:44.707969 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:13:44.569363 ignition[758]: no config URL provided
Aug 13 07:13:44.757832 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:13:44.569371 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:13:44.772262 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:13:44.569389 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:13:44.788907 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:13:44.569412 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Aug 13 07:13:44.806924 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:13:44.573177 ignition[758]: GET result: OK
Aug 13 07:13:44.821912 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:13:44.573281 ignition[758]: parsing config with SHA512: 2e4bf7c169be4239068185235135e31af24bbcf487da30b70aaa254bcaaa547e0b6164b29d4c4dcff03db82ee49174ce4c0ec566431a0f8813b28649a18e7c46
Aug 13 07:13:44.838922 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:13:44.581542 ignition[758]: fetch: fetch complete
Aug 13 07:13:44.862989 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:13:44.581553 ignition[758]: fetch: fetch passed
Aug 13 07:13:44.581633 ignition[758]: Ignition finished successfully
Aug 13 07:13:44.682257 ignition[764]: Ignition 2.19.0
Aug 13 07:13:44.682267 ignition[764]: Stage: kargs
Aug 13 07:13:44.682461 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:44.682476 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:44.683503 ignition[764]: kargs: kargs passed
Aug 13 07:13:44.683563 ignition[764]: Ignition finished successfully
Aug 13 07:13:44.754538 ignition[771]: Ignition 2.19.0
Aug 13 07:13:44.754546 ignition[771]: Stage: disks
Aug 13 07:13:44.754790 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:44.754809 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:44.756074 ignition[771]: disks: disks passed
Aug 13 07:13:44.756131 ignition[771]: Ignition finished successfully
Aug 13 07:13:44.908048 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Aug 13 07:13:45.060492 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:13:45.065057 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:13:45.210780 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:13:45.211256 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:13:45.212158 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:13:45.234994 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:13:45.264881 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:13:45.308702 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (787)
Aug 13 07:13:45.308772 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:45.308802 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:13:45.308845 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:13:45.265681 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:13:45.348927 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:13:45.348976 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:13:45.265777 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:13:45.265817 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:13:45.341651 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:13:45.368816 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:13:45.397989 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:13:45.541606 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:13:45.552786 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:13:45.562880 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:13:45.572889 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:13:45.711513 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:13:45.739919 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:13:45.767927 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:45.762959 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:13:45.786174 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:13:45.815057 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:13:45.820125 ignition[899]: INFO : Ignition 2.19.0
Aug 13 07:13:45.820125 ignition[899]: INFO : Stage: mount
Aug 13 07:13:45.852900 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:45.852900 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:45.852900 ignition[899]: INFO : mount: mount passed
Aug 13 07:13:45.852900 ignition[899]: INFO : Ignition finished successfully
Aug 13 07:13:45.831420 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:13:45.844904 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:13:45.845220 systemd-networkd[750]: eth0: Gained IPv6LL
Aug 13 07:13:46.217001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:13:46.263767 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (911)
Aug 13 07:13:46.281536 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:13:46.281629 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:13:46.281656 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:13:46.304101 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:13:46.304188 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:13:46.307541 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:13:46.346706 ignition[928]: INFO : Ignition 2.19.0
Aug 13 07:13:46.346706 ignition[928]: INFO : Stage: files
Aug 13 07:13:46.360932 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:46.360932 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:46.360932 ignition[928]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:13:46.360932 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:13:46.360932 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:13:46.360932 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:13:46.360932 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:13:46.360932 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:13:46.356979 unknown[928]: wrote ssh authorized keys file for user: core
Aug 13 07:13:46.462925 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:13:46.462925 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:13:46.496909 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:13:46.855694 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:13:46.855694 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:13:46.887888 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:13:47.258992 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:13:47.652381 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:13:47.652381 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:13:47.691934 ignition[928]: INFO : files: files passed
Aug 13 07:13:47.691934 ignition[928]: INFO : Ignition finished successfully
Aug 13 07:13:47.658327 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:13:47.687955 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:13:47.713973 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:13:47.768256 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:13:47.905930 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:13:47.905930 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:13:47.768396 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:13:47.955061 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:13:47.792280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:13:47.818227 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:13:47.848990 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:13:47.933519 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:13:47.933645 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:13:47.945715 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:13:47.964951 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:13:47.986065 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:13:47.992950 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:13:48.057642 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:13:48.073002 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:13:48.118078 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:13:48.131138 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:13:48.153177 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:13:48.171170 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:13:48.171391 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:13:48.200181 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:13:48.219106 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:13:48.238183 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:13:48.258167 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:13:48.277093 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:13:48.298212 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:13:48.318131 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:13:48.339206 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:13:48.359227 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:13:48.380160 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:13:48.398113 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:13:48.398316 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:13:48.426212 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:13:48.445162 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:13:48.466017 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:13:48.466188 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:13:48.487149 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:13:48.487370 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:13:48.516175 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:13:48.516413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:13:48.540210 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:13:48.604932 ignition[981]: INFO : Ignition 2.19.0
Aug 13 07:13:48.604932 ignition[981]: INFO : Stage: umount
Aug 13 07:13:48.604932 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:13:48.604932 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:13:48.604932 ignition[981]: INFO : umount: umount passed
Aug 13 07:13:48.604932 ignition[981]: INFO : Ignition finished successfully
Aug 13 07:13:48.540415 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:13:48.565058 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:13:48.601161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:13:48.613052 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:13:48.613294 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:13:48.662207 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:13:48.662504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:13:48.693708 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:13:48.694825 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:13:48.694948 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:13:48.710636 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:13:48.710777 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:13:48.720261 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:13:48.720394 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:13:48.748219 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:13:48.748285 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:13:48.756188 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:13:48.756253 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:13:48.773149 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:13:48.773217 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:13:48.805106 systemd[1]: Stopped target network.target - Network.
Aug 13 07:13:48.813101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:13:48.813188 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:13:48.846137 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:13:48.861897 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:13:48.867846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:13:48.881017 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:13:48.892125 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:13:48.922118 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:13:48.922194 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:13:48.930202 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:13:48.930270 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:13:48.947189 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:13:48.947268 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:13:48.964205 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:13:48.964287 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:13:48.981187 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:13:48.981271 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:13:48.998458 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:13:49.002849 systemd-networkd[750]: eth0: DHCPv6 lease lost
Aug 13 07:13:49.026162 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:13:49.046412 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:13:49.046551 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:13:49.065447 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:13:49.065842 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:13:49.076075 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:13:49.076135 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:13:49.101928 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:13:49.128855 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:13:49.128975 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:13:49.147993 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:13:49.148096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:13:49.165997 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:13:49.166091 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:13:49.183996 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:13:49.184114 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:13:49.205144 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:13:49.213589 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:13:49.213787 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:13:49.236612 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:13:49.236731 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:13:49.267973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:13:49.653906 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:13:49.268061 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:13:49.287952 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:13:49.288059 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:13:49.317900 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:13:49.318055 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:13:49.343147 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:13:49.343241 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:13:49.378988 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:13:49.381083 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:13:49.381162 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:13:49.428133 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:13:49.428221 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:13:49.449086 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:13:49.449165 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:13:49.480134 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:13:49.480231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:13:49.499636 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:13:49.499812 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:13:49.519346 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:13:49.519468 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:13:49.541255 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:13:49.567963 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:13:49.604038 systemd[1]: Switching root.
Aug 13 07:13:49.882884 systemd-journald[183]: Journal stopped
Aug 13 07:13:52.439208 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:13:52.439261 kernel: SELinux: policy capability open_perms=1
Aug 13 07:13:52.439283 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:13:52.439302 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:13:52.439319 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:13:52.439337 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:13:52.439359 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:13:52.439382 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:13:52.439402 kernel: audit: type=1403 audit(1755069230.294:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:13:52.439424 systemd[1]: Successfully loaded SELinux policy in 91.975ms.
Aug 13 07:13:52.439447 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.411ms.
Aug 13 07:13:52.439469 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:13:52.439490 systemd[1]: Detected virtualization google.
Aug 13 07:13:52.439511 systemd[1]: Detected architecture x86-64.
Aug 13 07:13:52.439537 systemd[1]: Detected first boot.
Aug 13 07:13:52.439560 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:13:52.439582 zram_generator::config[1022]: No configuration found.
Aug 13 07:13:52.439605 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:13:52.439629 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:13:52.439660 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:13:52.439682 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:13:52.439704 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:13:52.439725 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:13:52.439760 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:13:52.439783 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:13:52.439805 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:13:52.439832 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:13:52.439854 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:13:52.439875 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:13:52.439897 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:13:52.439919 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:13:52.439942 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:13:52.439963 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:13:52.439986 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:13:52.440013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:13:52.440034 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:13:52.440056 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:13:52.440078 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:13:52.440099 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:13:52.440122 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:13:52.440151 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:13:52.440174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:13:52.440198 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:13:52.440225 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:13:52.440247 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:13:52.440270 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:13:52.440292 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:13:52.440314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:13:52.440337 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:13:52.440360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:13:52.440388 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:13:52.440410 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:13:52.440433 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:13:52.440456 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:13:52.440479 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:13:52.440506 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:13:52.440529 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:13:52.440552 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:13:52.440576 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:13:52.440599 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:13:52.440624 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:13:52.440647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:13:52.440676 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:13:52.440703 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:13:52.440726 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:13:52.440761 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:13:52.440784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:13:52.440807 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:13:52.440829 kernel: ACPI: bus type drm_connector registered
Aug 13 07:13:52.440851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:13:52.440873 kernel: fuse: init (API version 7.39)
Aug 13 07:13:52.440898 kernel: loop: module loaded
Aug 13 07:13:52.440919 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:13:52.440942 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:13:52.440965 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:13:52.440988 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:13:52.441011 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:13:52.441034 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:13:52.441057 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:13:52.441108 systemd-journald[1110]: Collecting audit messages is disabled.
Aug 13 07:13:52.441158 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:13:52.441184 systemd-journald[1110]: Journal started
Aug 13 07:13:52.441231 systemd-journald[1110]: Runtime Journal (/run/log/journal/62f9109635aa44f785d86b806b8fcb35) is 8.0M, max 148.7M, 140.7M free.
Aug 13 07:13:51.208281 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:13:51.234284 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 07:13:51.234909 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:13:52.471784 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:13:52.496851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:13:52.513773 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:13:52.519778 systemd[1]: Stopped verity-setup.service.
Aug 13 07:13:52.544882 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:13:52.555795 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:13:52.566389 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:13:52.576185 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:13:52.586208 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:13:52.596162 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:13:52.606137 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:13:52.616195 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:13:52.626312 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:13:52.638339 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:13:52.650382 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:13:52.650633 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:13:52.662361 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:13:52.662641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:13:52.674359 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:13:52.674608 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:13:52.685325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:13:52.685580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:13:52.697343 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:13:52.697591 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:13:52.708305 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:13:52.708554 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:13:52.719297 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:13:52.729272 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:13:52.741324 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:13:52.753309 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:13:52.778425 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:13:52.793921 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:13:52.819836 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:13:52.829943 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:13:52.830025 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:13:52.841940 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:13:52.865049 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:13:52.880990 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:13:52.891105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:13:52.899201 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:13:52.916256 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:13:52.927939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:13:52.944936 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:13:52.954928 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:13:52.965893 systemd-journald[1110]: Time spent on flushing to /var/log/journal/62f9109635aa44f785d86b806b8fcb35 is 84.296ms for 928 entries.
Aug 13 07:13:52.965893 systemd-journald[1110]: System Journal (/var/log/journal/62f9109635aa44f785d86b806b8fcb35) is 8.0M, max 584.8M, 576.8M free.
Aug 13 07:13:53.085922 systemd-journald[1110]: Received client request to flush runtime journal.
Aug 13 07:13:53.086006 kernel: loop0: detected capacity change from 0 to 140768 Aug 13 07:13:52.966239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:13:52.993078 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:13:53.011965 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:13:53.031010 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:13:53.046968 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:13:53.058081 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:13:53.070982 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:13:53.088441 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:13:53.100664 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:13:53.112336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:13:53.138040 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:13:53.158912 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:13:53.166011 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. Aug 13 07:13:53.166047 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. Aug 13 07:13:53.171867 udevadm[1143]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 07:13:53.180945 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:13:53.192822 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Aug 13 07:13:53.222357 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 07:13:53.221928 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:13:53.234648 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:13:53.237232 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:13:53.309953 kernel: loop2: detected capacity change from 0 to 142488 Aug 13 07:13:53.334918 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:13:53.356022 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:13:53.437172 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Aug 13 07:13:53.437206 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Aug 13 07:13:53.446999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:13:53.454769 kernel: loop3: detected capacity change from 0 to 54824 Aug 13 07:13:53.529786 kernel: loop4: detected capacity change from 0 to 140768 Aug 13 07:13:53.585790 kernel: loop5: detected capacity change from 0 to 221472 Aug 13 07:13:53.624787 kernel: loop6: detected capacity change from 0 to 142488 Aug 13 07:13:53.680636 kernel: loop7: detected capacity change from 0 to 54824 Aug 13 07:13:53.709374 (sd-merge)[1168]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Aug 13 07:13:53.711782 (sd-merge)[1168]: Merged extensions into '/usr'. Aug 13 07:13:53.718666 systemd[1]: Reloading requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:13:53.718915 systemd[1]: Reloading... Aug 13 07:13:53.890417 zram_generator::config[1190]: No configuration found. 
Aug 13 07:13:54.184218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:13:54.187864 ldconfig[1136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:13:54.289296 systemd[1]: Reloading finished in 569 ms. Aug 13 07:13:54.317674 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:13:54.327543 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:13:54.349035 systemd[1]: Starting ensure-sysext.service... Aug 13 07:13:54.366481 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:13:54.389585 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:13:54.389787 systemd[1]: Reloading... Aug 13 07:13:54.412655 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:13:54.413301 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:13:54.415081 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:13:54.415678 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Aug 13 07:13:54.415832 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Aug 13 07:13:54.424691 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:13:54.428376 systemd-tmpfiles[1235]: Skipping /boot Aug 13 07:13:54.449423 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. 
Aug 13 07:13:54.449461 systemd-tmpfiles[1235]: Skipping /boot Aug 13 07:13:54.535769 zram_generator::config[1262]: No configuration found. Aug 13 07:13:54.664086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:13:54.729709 systemd[1]: Reloading finished in 339 ms. Aug 13 07:13:54.751753 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:13:54.768493 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:13:54.794100 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:13:54.813030 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:13:54.833018 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:13:54.857286 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:13:54.878052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:13:54.895285 augenrules[1324]: No rules Aug 13 07:13:54.897618 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:13:54.912529 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:13:54.947015 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:13:54.957710 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:13:54.958635 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Aug 13 07:13:54.979614 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 07:13:54.981233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:13:54.991125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:13:55.011132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:13:55.030713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:13:55.040993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:13:55.058560 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:13:55.067865 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:13:55.070429 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:13:55.082590 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:13:55.094823 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:13:55.106633 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:13:55.118857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:13:55.119107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:13:55.130609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:13:55.131941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:13:55.143526 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:13:55.144483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:13:55.155091 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Aug 13 07:13:55.205892 systemd-resolved[1319]: Positive Trust Anchors: Aug 13 07:13:55.205929 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:13:55.205990 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:13:55.220188 systemd[1]: Finished ensure-sysext.service. Aug 13 07:13:55.227346 systemd-resolved[1319]: Defaulting to hostname 'linux'. Aug 13 07:13:55.237400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:13:55.238291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:13:55.244475 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:13:55.261960 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:13:55.280000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:13:55.299006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:13:55.314644 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 13 07:13:55.324020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:13:55.336001 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Aug 13 07:13:55.345918 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:13:55.355915 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:13:55.355972 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:13:55.356545 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:13:55.368521 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:13:55.368816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:13:55.380412 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:13:55.380708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:13:55.392810 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Aug 13 07:13:55.401434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:13:55.401677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:13:55.420764 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1353) Aug 13 07:13:55.424372 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:13:55.425843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:13:55.451776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 07:13:55.457165 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 07:13:55.477935 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 07:13:55.492353 systemd[1]: Finished setup-oem.service - Setup OEM. 
Aug 13 07:13:55.503778 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:13:55.526447 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Aug 13 07:13:55.525990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:13:55.535763 kernel: ACPI: button: Sleep Button [SLPF] Aug 13 07:13:55.550081 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Aug 13 07:13:55.559936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:13:55.560064 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:13:55.607782 kernel: EDAC MC: Ver: 3.0.0 Aug 13 07:13:55.609239 systemd-networkd[1378]: lo: Link UP Aug 13 07:13:55.609252 systemd-networkd[1378]: lo: Gained carrier Aug 13 07:13:55.614123 systemd-networkd[1378]: Enumeration completed Aug 13 07:13:55.614292 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:13:55.615044 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:13:55.615056 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:13:55.618070 systemd-networkd[1378]: eth0: Link UP Aug 13 07:13:55.618085 systemd-networkd[1378]: eth0: Gained carrier Aug 13 07:13:55.618114 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:13:55.624131 systemd[1]: Reached target network.target - Network. 
Aug 13 07:13:55.627566 systemd-networkd[1378]: eth0: DHCPv4 address 10.128.0.36/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 13 07:13:55.640085 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:13:55.676104 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Aug 13 07:13:55.698609 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 13 07:13:55.707855 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:13:55.734049 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:13:55.751141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:13:55.761783 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:13:55.774544 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:13:55.796061 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:13:55.815296 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:13:55.856480 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:13:55.857676 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:13:55.866033 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:13:55.881815 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:13:55.886954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:13:55.898243 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:13:55.908041 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Aug 13 07:13:55.918967 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:13:55.930120 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:13:55.940047 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:13:55.950934 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:13:55.961902 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:13:55.961969 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:13:55.970916 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:13:55.982091 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:13:55.993665 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:13:56.014661 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:13:56.025899 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:13:56.037166 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:13:56.047801 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:13:56.057890 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:13:56.065974 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:13:56.066034 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:13:56.077929 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:13:56.093988 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 07:13:56.111986 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Aug 13 07:13:56.133970 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:13:56.156988 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:13:56.165019 jq[1426]: false Aug 13 07:13:56.168886 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:13:56.177974 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:13:56.196992 systemd[1]: Started ntpd.service - Network Time Service. Aug 13 07:13:56.210891 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:13:56.214427 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:13:56.224977 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:13:56.229849 extend-filesystems[1429]: Found loop4 Aug 13 07:13:56.229849 extend-filesystems[1429]: Found loop5 Aug 13 07:13:56.229849 extend-filesystems[1429]: Found loop6 Aug 13 07:13:56.229849 extend-filesystems[1429]: Found loop7 Aug 13 07:13:56.229849 extend-filesystems[1429]: Found sda Aug 13 07:13:56.229849 extend-filesystems[1429]: Found sda1 Aug 13 07:13:56.229849 extend-filesystems[1429]: Found sda2 Aug 13 07:13:56.229849 extend-filesystems[1429]: Found sda3 Aug 13 07:13:56.229849 extend-filesystems[1429]: Found usr Aug 13 07:13:56.366780 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Aug 13 07:13:56.366835 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Aug 13 07:13:56.366873 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1355) Aug 13 07:13:56.367095 extend-filesystems[1429]: Found sda4 Aug 13 07:13:56.367095 extend-filesystems[1429]: Found sda6 Aug 13 07:13:56.367095 extend-filesystems[1429]: Found sda7
Aug 13 07:13:56.367095 extend-filesystems[1429]: Found sda9 Aug 13 07:13:56.367095 extend-filesystems[1429]: Checking size of /dev/sda9 Aug 13 07:13:56.367095 extend-filesystems[1429]: Resized partition /dev/sda9 Aug 13 07:13:56.303499 dbus-daemon[1425]: [system] SELinux support is enabled Aug 13 07:13:56.368063 coreos-metadata[1424]: Aug 13 07:13:56.237 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Aug 13 07:13:56.368063 coreos-metadata[1424]: Aug 13 07:13:56.239 INFO Fetch successful Aug 13 07:13:56.368063 coreos-metadata[1424]: Aug 13 07:13:56.239 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Aug 13 07:13:56.368063 coreos-metadata[1424]: Aug 13 07:13:56.239 INFO Fetch successful Aug 13 07:13:56.368063 coreos-metadata[1424]: Aug 13 07:13:56.239 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Aug 13 07:13:56.368063 coreos-metadata[1424]: Aug 13 07:13:56.240 INFO Fetch successful Aug 13 07:13:56.368063 coreos-metadata[1424]: Aug 13 07:13:56.241 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Aug 13 07:13:56.368063 coreos-metadata[1424]: Aug 13 07:13:56.241 INFO Fetch successful Aug 13 07:13:56.251017 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:13:56.369071 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:13:56.369071 extend-filesystems[1452]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 07:13:56.369071 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 2 Aug 13 07:13:56.369071 extend-filesystems[1452]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Aug 13 07:13:56.313007 dbus-daemon[1425]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1378 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 07:13:56.266566 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:10 UTC 2025 (1): Starting Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: ---------------------------------------------------- Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: ntp-4 is maintained by Network Time Foundation, Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: corporation. Support and training for ntp-4 are
Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: available at https://www.nwtime.org/support Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: ---------------------------------------------------- Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: proto: precision = 0.087 usec (-23) Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: basedate set to 2025-07-31 Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: gps base set to 2025-08-03 (week 2378) Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: Listen normally on 3 eth0 10.128.0.36:123 Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: Listen normally on 4 lo [::1]:123 Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: bind(21) AF_INET6 fe80::4001:aff:fe80:24%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:24%2#123 Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: failed to init interface for address fe80::4001:aff:fe80:24%2 Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: Listening on routing socket on fd #21 for interface updates Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 07:13:56.467303 ntpd[1432]: 13 Aug 07:13:56 ntpd[1432]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 07:13:56.470330 extend-filesystems[1429]: Resized filesystem in /dev/sda9 Aug 13 07:13:56.351156 ntpd[1432]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:10 UTC 2025 (1): Starting
Aug 13 07:13:56.267388 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:13:56.351195 ntpd[1432]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 07:13:56.277895 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:13:56.351213 ntpd[1432]: ---------------------------------------------------- Aug 13 07:13:56.304892 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:13:56.500165 update_engine[1447]: I20250813 07:13:56.447487 1447 main.cc:92] Flatcar Update Engine starting Aug 13 07:13:56.500165 update_engine[1447]: I20250813 07:13:56.451422 1447 update_check_scheduler.cc:74] Next update check in 4m31s Aug 13 07:13:56.351227 ntpd[1432]: ntp-4 is maintained by Network Time Foundation, Aug 13 07:13:56.344917 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:13:56.509404 jq[1453]: true Aug 13 07:13:56.351241 ntpd[1432]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 07:13:56.380377 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:13:56.351256 ntpd[1432]: corporation. Support and training for ntp-4 are Aug 13 07:13:56.380688 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:13:56.351270 ntpd[1432]: available at https://www.nwtime.org/support Aug 13 07:13:56.381265 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:13:56.351285 ntpd[1432]: ---------------------------------------------------- Aug 13 07:13:56.381822 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:13:56.354379 ntpd[1432]: proto: precision = 0.087 usec (-23) Aug 13 07:13:56.426552 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:13:56.355614 ntpd[1432]: basedate set to 2025-07-31 Aug 13 07:13:56.427908 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:13:56.355640 ntpd[1432]: gps base set to 2025-08-03 (week 2378) Aug 13 07:13:56.461362 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:13:56.365518 ntpd[1432]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 07:13:56.462556 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:13:56.365582 ntpd[1432]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 07:13:56.385305 ntpd[1432]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 07:13:56.385384 ntpd[1432]: Listen normally on 3 eth0 10.128.0.36:123 Aug 13 07:13:56.385444 ntpd[1432]: Listen normally on 4 lo [::1]:123 Aug 13 07:13:56.385526 ntpd[1432]: bind(21) AF_INET6 fe80::4001:aff:fe80:24%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 07:13:56.385558 ntpd[1432]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:24%2#123 Aug 13 07:13:56.385580 ntpd[1432]: failed to init interface for address fe80::4001:aff:fe80:24%2 Aug 13 07:13:56.385633 ntpd[1432]: Listening on routing socket on fd #21 for interface updates Aug 13 07:13:56.399155 ntpd[1432]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 07:13:56.399202 ntpd[1432]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 07:13:56.528365 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:13:56.536671 dbus-daemon[1425]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 07:13:56.596821 jq[1462]: true Aug 13 07:13:56.627259 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:13:56.638801 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Aug 13 07:13:56.657678 tar[1461]: linux-amd64/helm Aug 13 07:13:56.660889 systemd-networkd[1378]: eth0: Gained IPv6LL Aug 13 07:13:56.669404 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:13:56.679336 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:13:56.681484 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:13:56.681543 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:13:56.701991 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 07:13:56.710121 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:13:56.710170 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:13:56.728423 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:13:56.738614 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:13:56.742128 systemd-logind[1440]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 07:13:56.742558 systemd-logind[1440]: Watching system buttons on /dev/input/event3 (Sleep Button) Aug 13 07:13:56.742595 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:13:56.744844 systemd-logind[1440]: New seat seat0. Aug 13 07:13:56.751080 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:13:56.770892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 07:13:56.787996 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:13:56.808951 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Aug 13 07:13:56.817083 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:13:56.859811 bash[1499]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:13:56.863272 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:13:56.887855 systemd[1]: Starting sshkeys.service... Aug 13 07:13:56.911814 init.sh[1498]: + '[' -e /etc/default/instance_configs.cfg.template ']' Aug 13 07:13:56.916140 init.sh[1498]: + echo -e '[InstanceSetup]\nset_host_keys = false' Aug 13 07:13:56.916140 init.sh[1498]: + /usr/bin/google_instance_setup Aug 13 07:13:56.923472 dbus-daemon[1425]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 07:13:56.925907 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 07:13:56.927010 dbus-daemon[1425]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1480 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 07:13:56.950305 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 07:13:57.005157 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:13:57.033039 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 07:13:57.056237 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Aug 13 07:13:57.132898 polkitd[1508]: Started polkitd version 121 Aug 13 07:13:57.156938 polkitd[1508]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 07:13:57.157183 polkitd[1508]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 07:13:57.161517 polkitd[1508]: Finished loading, compiling and executing 2 rules Aug 13 07:13:57.164183 dbus-daemon[1425]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 07:13:57.164443 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 07:13:57.166900 polkitd[1508]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 07:13:57.226899 systemd-hostnamed[1480]: Hostname set to (transient) Aug 13 07:13:57.233369 systemd-resolved[1319]: System hostname changed to 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal'. Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 07:13:57.223 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 07:13:57.234 INFO Fetch failed with 404: resource not found Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 07:13:57.234 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 07:13:57.234 INFO Fetch successful Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 07:13:57.234 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 07:13:57.234 INFO Fetch failed with 404: resource not found Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 07:13:57.234 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 07:13:57.234 INFO Fetch failed with 404: resource not found Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 
07:13:57.234 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Aug 13 07:13:57.235171 coreos-metadata[1513]: Aug 13 07:13:57.234 INFO Fetch successful Aug 13 07:13:57.239387 unknown[1513]: wrote ssh authorized keys file for user: core Aug 13 07:13:57.269174 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:13:57.331852 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:13:57.334228 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:13:57.361566 systemd[1]: Finished sshkeys.service. Aug 13 07:13:57.608485 containerd[1463]: time="2025-08-13T07:13:57.608289149Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:13:57.785363 containerd[1463]: time="2025-08-13T07:13:57.784787719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:13:57.793248 containerd[1463]: time="2025-08-13T07:13:57.793183588Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:13:57.793767 containerd[1463]: time="2025-08-13T07:13:57.793414374Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:13:57.793767 containerd[1463]: time="2025-08-13T07:13:57.793456744Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:13:57.795871 containerd[1463]: time="2025-08-13T07:13:57.794816410Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Aug 13 07:13:57.795871 containerd[1463]: time="2025-08-13T07:13:57.794860077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:13:57.795871 containerd[1463]: time="2025-08-13T07:13:57.794974582Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:13:57.795871 containerd[1463]: time="2025-08-13T07:13:57.794995580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:13:57.795871 containerd[1463]: time="2025-08-13T07:13:57.795272775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:13:57.795871 containerd[1463]: time="2025-08-13T07:13:57.795296469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:13:57.795871 containerd[1463]: time="2025-08-13T07:13:57.795316968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:13:57.795871 containerd[1463]: time="2025-08-13T07:13:57.795334294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:13:57.795871 containerd[1463]: time="2025-08-13T07:13:57.795452305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:13:57.797000 containerd[1463]: time="2025-08-13T07:13:57.796931522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 07:13:57.798221 containerd[1463]: time="2025-08-13T07:13:57.797503350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:13:57.798221 containerd[1463]: time="2025-08-13T07:13:57.797537788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:13:57.798221 containerd[1463]: time="2025-08-13T07:13:57.797687643Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:13:57.798704 containerd[1463]: time="2025-08-13T07:13:57.798673290Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:13:57.806317 containerd[1463]: time="2025-08-13T07:13:57.806070674Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:13:57.807204 containerd[1463]: time="2025-08-13T07:13:57.806547827Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:13:57.807204 containerd[1463]: time="2025-08-13T07:13:57.806593621Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:13:57.807204 containerd[1463]: time="2025-08-13T07:13:57.806628001Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:13:57.807204 containerd[1463]: time="2025-08-13T07:13:57.806652838Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:13:57.808061 containerd[1463]: time="2025-08-13T07:13:57.807511500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Aug 13 07:13:57.808933 containerd[1463]: time="2025-08-13T07:13:57.808902775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.809902818Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.809939982Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.809963710Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.809986588Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.810008959Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.810030248Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.810054843Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.810087470Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.810638959Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.810672785Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:13:57.811324 containerd[1463]: time="2025-08-13T07:13:57.810706907Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:13:57.812448 containerd[1463]: time="2025-08-13T07:13:57.812421582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.812576 containerd[1463]: time="2025-08-13T07:13:57.812558744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.812919 containerd[1463]: time="2025-08-13T07:13:57.812689578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.812919 containerd[1463]: time="2025-08-13T07:13:57.812727710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.812919 containerd[1463]: time="2025-08-13T07:13:57.812810271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.812919 containerd[1463]: time="2025-08-13T07:13:57.812888654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.813662 containerd[1463]: time="2025-08-13T07:13:57.813390889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.813662 containerd[1463]: time="2025-08-13T07:13:57.813430858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.813662 containerd[1463]: time="2025-08-13T07:13:57.813608858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Aug 13 07:13:57.814709 containerd[1463]: time="2025-08-13T07:13:57.814305528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.814709 containerd[1463]: time="2025-08-13T07:13:57.814347514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.814709 containerd[1463]: time="2025-08-13T07:13:57.814426033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.814709 containerd[1463]: time="2025-08-13T07:13:57.814477637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.814709 containerd[1463]: time="2025-08-13T07:13:57.814506451Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:13:57.814709 containerd[1463]: time="2025-08-13T07:13:57.814567063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.814709 containerd[1463]: time="2025-08-13T07:13:57.814588855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.814709 containerd[1463]: time="2025-08-13T07:13:57.814640745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:13:57.818005 containerd[1463]: time="2025-08-13T07:13:57.816681724Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:13:57.818005 containerd[1463]: time="2025-08-13T07:13:57.816793763Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:13:57.818005 containerd[1463]: time="2025-08-13T07:13:57.816819423Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:13:57.818005 containerd[1463]: time="2025-08-13T07:13:57.816862888Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:13:57.818005 containerd[1463]: time="2025-08-13T07:13:57.816889831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:13:57.818005 containerd[1463]: time="2025-08-13T07:13:57.816913283Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:13:57.818005 containerd[1463]: time="2025-08-13T07:13:57.816952162Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:13:57.818005 containerd[1463]: time="2025-08-13T07:13:57.816970475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:13:57.818406 containerd[1463]: time="2025-08-13T07:13:57.817702687Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:13:57.818406 containerd[1463]: time="2025-08-13T07:13:57.817863713Z" level=info msg="Connect containerd service" Aug 13 07:13:57.818406 containerd[1463]: time="2025-08-13T07:13:57.817942713Z" level=info msg="using legacy CRI server" Aug 13 07:13:57.818406 containerd[1463]: time="2025-08-13T07:13:57.817965242Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:13:57.822454 containerd[1463]: time="2025-08-13T07:13:57.820533632Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:13:57.823457 containerd[1463]: time="2025-08-13T07:13:57.823199242Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:13:57.824348 containerd[1463]: time="2025-08-13T07:13:57.823990966Z" level=info msg="Start subscribing containerd event" Aug 13 07:13:57.824348 containerd[1463]: time="2025-08-13T07:13:57.824073746Z" level=info msg="Start recovering state" Aug 13 07:13:57.824348 containerd[1463]: time="2025-08-13T07:13:57.824182236Z" level=info msg="Start event monitor" Aug 13 07:13:57.824348 containerd[1463]: time="2025-08-13T07:13:57.824204271Z" level=info msg="Start 
snapshots syncer" Aug 13 07:13:57.824348 containerd[1463]: time="2025-08-13T07:13:57.824218504Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:13:57.824348 containerd[1463]: time="2025-08-13T07:13:57.824230239Z" level=info msg="Start streaming server" Aug 13 07:13:57.833362 containerd[1463]: time="2025-08-13T07:13:57.833107267Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:13:57.836760 containerd[1463]: time="2025-08-13T07:13:57.835232871Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:13:57.836760 containerd[1463]: time="2025-08-13T07:13:57.836609143Z" level=info msg="containerd successfully booted in 0.232559s" Aug 13 07:13:57.836446 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:13:58.351545 tar[1461]: linux-amd64/LICENSE Aug 13 07:13:58.351545 tar[1461]: linux-amd64/README.md Aug 13 07:13:58.383837 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:13:58.395466 instance-setup[1506]: INFO Running google_set_multiqueue. Aug 13 07:13:58.427969 instance-setup[1506]: INFO Set channels for eth0 to 2. Aug 13 07:13:58.437042 instance-setup[1506]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Aug 13 07:13:58.442707 instance-setup[1506]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Aug 13 07:13:58.442981 instance-setup[1506]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Aug 13 07:13:58.446950 instance-setup[1506]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Aug 13 07:13:58.447187 instance-setup[1506]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Aug 13 07:13:58.451824 instance-setup[1506]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Aug 13 07:13:58.452139 instance-setup[1506]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Aug 13 07:13:58.455093 instance-setup[1506]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Aug 13 07:13:58.468364 instance-setup[1506]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Aug 13 07:13:58.473346 instance-setup[1506]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Aug 13 07:13:58.475215 instance-setup[1506]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Aug 13 07:13:58.475834 instance-setup[1506]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Aug 13 07:13:58.499352 init.sh[1498]: + /usr/bin/google_metadata_script_runner --script-type startup Aug 13 07:13:58.539887 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:13:58.607270 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:13:58.626883 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:13:58.648168 systemd[1]: Started sshd@0-10.128.0.36:22-139.178.68.195:57916.service - OpenSSH per-connection server daemon (139.178.68.195:57916). Aug 13 07:13:58.661705 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:13:58.662022 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:13:58.684604 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:13:58.735864 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:13:58.735855 startup-script[1573]: INFO Starting startup scripts. Aug 13 07:13:58.742418 startup-script[1573]: INFO No startup scripts found in metadata. Aug 13 07:13:58.742506 startup-script[1573]: INFO Finished running startup scripts. Aug 13 07:13:58.759891 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Aug 13 07:13:58.777771 init.sh[1498]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Aug 13 07:13:58.777771 init.sh[1498]: + daemon_pids=() Aug 13 07:13:58.777771 init.sh[1498]: + for d in accounts clock_skew network Aug 13 07:13:58.777771 init.sh[1498]: + daemon_pids+=($!) Aug 13 07:13:58.777771 init.sh[1498]: + for d in accounts clock_skew network Aug 13 07:13:58.777771 init.sh[1498]: + daemon_pids+=($!) Aug 13 07:13:58.777771 init.sh[1498]: + for d in accounts clock_skew network Aug 13 07:13:58.777771 init.sh[1498]: + daemon_pids+=($!) Aug 13 07:13:58.777771 init.sh[1498]: + NOTIFY_SOCKET=/run/systemd/notify Aug 13 07:13:58.777771 init.sh[1498]: + /usr/bin/systemd-notify --ready Aug 13 07:13:58.779377 init.sh[1594]: + /usr/bin/google_accounts_daemon Aug 13 07:13:58.780096 init.sh[1595]: + /usr/bin/google_clock_skew_daemon Aug 13 07:13:58.780377 init.sh[1596]: + /usr/bin/google_network_daemon Aug 13 07:13:58.780566 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:13:58.791247 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:13:58.800532 systemd[1]: Started oem-gce.service - GCE Linux Agent. Aug 13 07:13:58.817353 init.sh[1498]: + wait -n 1594 1595 1596 Aug 13 07:13:59.056925 sshd[1585]: Accepted publickey for core from 139.178.68.195 port 57916 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:59.059871 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:59.087310 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:13:59.107270 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:13:59.127308 systemd-logind[1440]: New session 1 of user core. Aug 13 07:13:59.158434 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:13:59.181598 google-networking[1596]: INFO Starting Google Networking daemon. 
Aug 13 07:13:59.185243 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:13:59.219534 google-clock-skew[1595]: INFO Starting Google Clock Skew daemon. Aug 13 07:13:59.224877 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:13:59.229880 google-clock-skew[1595]: INFO Clock drift token has changed: 0. Aug 13 07:13:59.329185 groupadd[1614]: group added to /etc/group: name=google-sudoers, GID=1000 Aug 13 07:13:59.335655 groupadd[1614]: group added to /etc/gshadow: name=google-sudoers Aug 13 07:13:59.351783 ntpd[1432]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:24%2]:123 Aug 13 07:13:59.352580 ntpd[1432]: 13 Aug 07:13:59 ntpd[1432]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:24%2]:123 Aug 13 07:13:59.424265 groupadd[1614]: new group: name=google-sudoers, GID=1000 Aug 13 07:13:59.452592 systemd[1606]: Queued start job for default target default.target. Aug 13 07:13:59.459480 systemd[1606]: Created slice app.slice - User Application Slice. Aug 13 07:13:59.459531 systemd[1606]: Reached target paths.target - Paths. Aug 13 07:13:59.459557 systemd[1606]: Reached target timers.target - Timers. Aug 13 07:13:59.463374 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:13:59.473144 google-accounts[1594]: INFO Starting Google Accounts daemon. Aug 13 07:13:59.489402 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:13:59.489604 systemd[1606]: Reached target sockets.target - Sockets. Aug 13 07:13:59.489631 systemd[1606]: Reached target basic.target - Basic System. Aug 13 07:13:59.489700 systemd[1606]: Reached target default.target - Main User Target. Aug 13 07:13:59.489781 systemd[1606]: Startup finished in 246ms. Aug 13 07:13:59.492529 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:13:59.493943 google-accounts[1594]: WARNING OS Login not installed. 
Aug 13 07:13:59.495803 google-accounts[1594]: INFO Creating a new user account for 0. Aug 13 07:13:59.507583 init.sh[1624]: useradd: invalid user name '0': use --badname to ignore Aug 13 07:13:59.506713 google-accounts[1594]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Aug 13 07:13:59.512491 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:13:59.725018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:13:59.745568 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:13:59.752353 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:13:59.777259 systemd[1]: Started sshd@1-10.128.0.36:22-139.178.68.195:57928.service - OpenSSH per-connection server daemon (139.178.68.195:57928). Aug 13 07:13:59.787843 systemd[1]: Startup finished in 1.056s (kernel) + 9.514s (initrd) + 9.574s (userspace) = 20.145s. Aug 13 07:14:00.000297 systemd-resolved[1319]: Clock change detected. Flushing caches. Aug 13 07:14:00.000628 google-clock-skew[1595]: INFO Synced system time with hardware clock. Aug 13 07:14:00.230951 sshd[1635]: Accepted publickey for core from 139.178.68.195 port 57928 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:14:00.233030 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:00.241539 systemd-logind[1440]: New session 2 of user core. Aug 13 07:14:00.246769 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:14:00.447786 sshd[1635]: pam_unix(sshd:session): session closed for user core Aug 13 07:14:00.454442 systemd[1]: sshd@1-10.128.0.36:22-139.178.68.195:57928.service: Deactivated successfully. Aug 13 07:14:00.457611 systemd[1]: session-2.scope: Deactivated successfully. 
Aug 13 07:14:00.458723 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit.
Aug 13 07:14:00.460408 systemd-logind[1440]: Removed session 2.
Aug 13 07:14:00.503677 systemd[1]: Started sshd@2-10.128.0.36:22-139.178.68.195:36282.service - OpenSSH per-connection server daemon (139.178.68.195:36282).
Aug 13 07:14:00.793980 sshd[1650]: Accepted publickey for core from 139.178.68.195 port 36282 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:14:00.796766 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:00.804505 systemd-logind[1440]: New session 3 of user core.
Aug 13 07:14:00.813748 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:14:00.844145 kubelet[1633]: E0813 07:14:00.844021 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:14:00.846999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:14:00.847257 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:14:00.847726 systemd[1]: kubelet.service: Consumed 1.322s CPU time.
Aug 13 07:14:01.003399 sshd[1650]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:01.007927 systemd[1]: sshd@2-10.128.0.36:22-139.178.68.195:36282.service: Deactivated successfully.
Aug 13 07:14:01.010300 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 07:14:01.012037 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit.
Aug 13 07:14:01.013820 systemd-logind[1440]: Removed session 3.
Aug 13 07:14:01.058864 systemd[1]: Started sshd@3-10.128.0.36:22-139.178.68.195:36298.service - OpenSSH per-connection server daemon (139.178.68.195:36298).
Aug 13 07:14:01.345348 sshd[1660]: Accepted publickey for core from 139.178.68.195 port 36298 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:14:01.347213 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:01.353573 systemd-logind[1440]: New session 4 of user core.
Aug 13 07:14:01.361748 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:14:01.559435 sshd[1660]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:01.563848 systemd[1]: sshd@3-10.128.0.36:22-139.178.68.195:36298.service: Deactivated successfully.
Aug 13 07:14:01.566278 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:14:01.568280 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:14:01.569874 systemd-logind[1440]: Removed session 4.
Aug 13 07:14:01.615928 systemd[1]: Started sshd@4-10.128.0.36:22-139.178.68.195:36308.service - OpenSSH per-connection server daemon (139.178.68.195:36308).
Aug 13 07:14:01.905306 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 36308 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:14:01.907229 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:01.913615 systemd-logind[1440]: New session 5 of user core.
Aug 13 07:14:01.921785 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:14:02.098329 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:14:02.098868 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:14:02.114403 sudo[1670]: pam_unix(sudo:session): session closed for user root
Aug 13 07:14:02.157810 sshd[1667]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:02.163807 systemd[1]: sshd@4-10.128.0.36:22-139.178.68.195:36308.service: Deactivated successfully.
Aug 13 07:14:02.166003 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:14:02.166981 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:14:02.168493 systemd-logind[1440]: Removed session 5.
Aug 13 07:14:02.217865 systemd[1]: Started sshd@5-10.128.0.36:22-139.178.68.195:36318.service - OpenSSH per-connection server daemon (139.178.68.195:36318).
Aug 13 07:14:02.494301 sshd[1675]: Accepted publickey for core from 139.178.68.195 port 36318 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:14:02.496289 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:02.502442 systemd-logind[1440]: New session 6 of user core.
Aug 13 07:14:02.509728 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:14:02.672066 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:14:02.672632 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:14:02.677643 sudo[1679]: pam_unix(sudo:session): session closed for user root
Aug 13 07:14:02.691016 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:14:02.691517 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:14:02.713941 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:14:02.716441 auditctl[1682]: No rules
Aug 13 07:14:02.716992 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:14:02.717262 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:14:02.724122 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:14:02.764217 augenrules[1700]: No rules
Aug 13 07:14:02.766591 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:14:02.768224 sudo[1678]: pam_unix(sudo:session): session closed for user root
Aug 13 07:14:02.811013 sshd[1675]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:02.816669 systemd[1]: sshd@5-10.128.0.36:22-139.178.68.195:36318.service: Deactivated successfully.
Aug 13 07:14:02.818938 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:14:02.820004 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:14:02.821404 systemd-logind[1440]: Removed session 6.
Aug 13 07:14:02.866892 systemd[1]: Started sshd@6-10.128.0.36:22-139.178.68.195:36334.service - OpenSSH per-connection server daemon (139.178.68.195:36334).
Aug 13 07:14:03.156950 sshd[1708]: Accepted publickey for core from 139.178.68.195 port 36334 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:14:03.158947 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:03.165314 systemd-logind[1440]: New session 7 of user core.
Aug 13 07:14:03.171710 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:14:03.335690 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:14:03.336181 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:14:03.777929 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:14:03.789228 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:14:04.241860 dockerd[1727]: time="2025-08-13T07:14:04.241686449Z" level=info msg="Starting up"
Aug 13 07:14:04.495884 dockerd[1727]: time="2025-08-13T07:14:04.495428237Z" level=info msg="Loading containers: start."
Aug 13 07:14:04.648577 kernel: Initializing XFRM netlink socket
Aug 13 07:14:04.766308 systemd-networkd[1378]: docker0: Link UP
Aug 13 07:14:04.788450 dockerd[1727]: time="2025-08-13T07:14:04.788384614Z" level=info msg="Loading containers: done."
Aug 13 07:14:04.810843 dockerd[1727]: time="2025-08-13T07:14:04.810769959Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:14:04.811077 dockerd[1727]: time="2025-08-13T07:14:04.810915357Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:14:04.811143 dockerd[1727]: time="2025-08-13T07:14:04.811082648Z" level=info msg="Daemon has completed initialization"
Aug 13 07:14:04.851030 dockerd[1727]: time="2025-08-13T07:14:04.850862779Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:14:04.851506 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:14:05.738601 containerd[1463]: time="2025-08-13T07:14:05.738543698Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 07:14:06.357391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099907264.mount: Deactivated successfully.
Aug 13 07:14:08.150939 containerd[1463]: time="2025-08-13T07:14:08.150831880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:08.152669 containerd[1463]: time="2025-08-13T07:14:08.152594827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28084387"
Aug 13 07:14:08.153956 containerd[1463]: time="2025-08-13T07:14:08.153829009Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:08.157575 containerd[1463]: time="2025-08-13T07:14:08.157511952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:08.159250 containerd[1463]: time="2025-08-13T07:14:08.158980028Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.42038231s"
Aug 13 07:14:08.159250 containerd[1463]: time="2025-08-13T07:14:08.159034026Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 07:14:08.160163 containerd[1463]: time="2025-08-13T07:14:08.160116001Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 07:14:09.633418 containerd[1463]: time="2025-08-13T07:14:09.633345590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:09.635129 containerd[1463]: time="2025-08-13T07:14:09.635027946Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24715179"
Aug 13 07:14:09.636387 containerd[1463]: time="2025-08-13T07:14:09.636302401Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:09.641278 containerd[1463]: time="2025-08-13T07:14:09.641187771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:09.643510 containerd[1463]: time="2025-08-13T07:14:09.643223130Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.482929123s"
Aug 13 07:14:09.643510 containerd[1463]: time="2025-08-13T07:14:09.643278049Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 07:14:09.644613 containerd[1463]: time="2025-08-13T07:14:09.644395282Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 07:14:10.798254 containerd[1463]: time="2025-08-13T07:14:10.798188422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:10.799920 containerd[1463]: time="2025-08-13T07:14:10.799834688Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18785616"
Aug 13 07:14:10.801130 containerd[1463]: time="2025-08-13T07:14:10.801039396Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:10.805525 containerd[1463]: time="2025-08-13T07:14:10.805440217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:10.807575 containerd[1463]: time="2025-08-13T07:14:10.807391747Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.162951931s"
Aug 13 07:14:10.807575 containerd[1463]: time="2025-08-13T07:14:10.807443137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 07:14:10.808322 containerd[1463]: time="2025-08-13T07:14:10.808294760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 07:14:11.097587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:14:11.107800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:14:11.380178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:14:11.386656 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:14:11.442018 kubelet[1933]: E0813 07:14:11.441930 1933 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:14:11.446215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:14:11.446485 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:14:12.223807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964631817.mount: Deactivated successfully.
Aug 13 07:14:12.854367 containerd[1463]: time="2025-08-13T07:14:12.854289933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:12.855737 containerd[1463]: time="2025-08-13T07:14:12.855668373Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30385507"
Aug 13 07:14:12.857272 containerd[1463]: time="2025-08-13T07:14:12.857201569Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:12.860088 containerd[1463]: time="2025-08-13T07:14:12.860047841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:12.861399 containerd[1463]: time="2025-08-13T07:14:12.861049188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 2.052664322s"
Aug 13 07:14:12.861399 containerd[1463]: time="2025-08-13T07:14:12.861108192Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 07:14:12.862278 containerd[1463]: time="2025-08-13T07:14:12.861962934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 07:14:13.326359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690755953.mount: Deactivated successfully.
Aug 13 07:14:14.714493 containerd[1463]: time="2025-08-13T07:14:14.714417002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:14.716193 containerd[1463]: time="2025-08-13T07:14:14.716125801Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Aug 13 07:14:14.717302 containerd[1463]: time="2025-08-13T07:14:14.717224677Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:14.720962 containerd[1463]: time="2025-08-13T07:14:14.720897126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:14.722596 containerd[1463]: time="2025-08-13T07:14:14.722388171Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.860381106s"
Aug 13 07:14:14.722596 containerd[1463]: time="2025-08-13T07:14:14.722441368Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 07:14:14.723661 containerd[1463]: time="2025-08-13T07:14:14.723398063Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 07:14:15.330598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87668647.mount: Deactivated successfully.
Aug 13 07:14:15.337727 containerd[1463]: time="2025-08-13T07:14:15.337658873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:15.338963 containerd[1463]: time="2025-08-13T07:14:15.338894019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Aug 13 07:14:15.340302 containerd[1463]: time="2025-08-13T07:14:15.340240300Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:15.343361 containerd[1463]: time="2025-08-13T07:14:15.343322617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:15.344585 containerd[1463]: time="2025-08-13T07:14:15.344399883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 620.960324ms"
Aug 13 07:14:15.344585 containerd[1463]: time="2025-08-13T07:14:15.344446928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 07:14:15.345547 containerd[1463]: time="2025-08-13T07:14:15.345288214Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 07:14:15.761422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280260632.mount: Deactivated successfully.
Aug 13 07:14:17.878568 containerd[1463]: time="2025-08-13T07:14:17.878486202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:17.880415 containerd[1463]: time="2025-08-13T07:14:17.880348845Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786577"
Aug 13 07:14:17.881439 containerd[1463]: time="2025-08-13T07:14:17.881367287Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:17.885235 containerd[1463]: time="2025-08-13T07:14:17.885197195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:17.887155 containerd[1463]: time="2025-08-13T07:14:17.886938100Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.54159936s"
Aug 13 07:14:17.887155 containerd[1463]: time="2025-08-13T07:14:17.886986415Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 07:14:21.473921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 07:14:21.485629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:14:22.356508 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 07:14:22.356644 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 07:14:22.356988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:14:22.373799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:14:22.408509 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-7.scope)...
Aug 13 07:14:22.408532 systemd[1]: Reloading...
Aug 13 07:14:22.529792 zram_generator::config[2128]: No configuration found.
Aug 13 07:14:22.706627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:14:22.819923 systemd[1]: Reloading finished in 410 ms.
Aug 13 07:14:22.937617 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 07:14:22.937781 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 07:14:22.940112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:14:22.948924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:14:23.547203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:14:23.562246 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:14:23.614632 kubelet[2179]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:14:23.614632 kubelet[2179]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:14:23.614632 kubelet[2179]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:14:23.615215 kubelet[2179]: I0813 07:14:23.614717 2179 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:14:24.309709 kubelet[2179]: I0813 07:14:24.309648 2179 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 07:14:24.310215 kubelet[2179]: I0813 07:14:24.309686 2179 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:14:24.311656 kubelet[2179]: I0813 07:14:24.311617 2179 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 07:14:24.348654 kubelet[2179]: E0813 07:14:24.348599 2179 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:14:24.350117 kubelet[2179]: I0813 07:14:24.349902 2179 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:14:24.359035 kubelet[2179]: E0813 07:14:24.358989 2179 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:14:24.359035 kubelet[2179]: I0813 07:14:24.359032 2179 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:14:24.364559 kubelet[2179]: I0813 07:14:24.364527 2179 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:14:24.365753 kubelet[2179]: I0813 07:14:24.365708 2179 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 07:14:24.365999 kubelet[2179]: I0813 07:14:24.365952 2179 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:14:24.366236 kubelet[2179]: I0813 07:14:24.365990 2179 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 07:14:24.366439 kubelet[2179]: I0813 07:14:24.366237 2179 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:14:24.366439 kubelet[2179]: I0813 07:14:24.366254 2179 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 07:14:24.366439 kubelet[2179]: I0813 07:14:24.366415 2179 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:14:24.370684 kubelet[2179]: I0813 07:14:24.370644 2179 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 07:14:24.370684 kubelet[2179]: I0813 07:14:24.370681 2179 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:14:24.371013 kubelet[2179]: I0813 07:14:24.370733 2179 kubelet.go:314] "Adding apiserver pod source"
Aug 13 07:14:24.371013 kubelet[2179]: I0813 07:14:24.370760 2179 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:14:24.378829 kubelet[2179]: W0813 07:14:24.378230 2179 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused
Aug 13 07:14:24.378829 kubelet[2179]: E0813 07:14:24.378328 2179 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:14:24.379601 kubelet[2179]: W0813 07:14:24.379528 2179 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused
Aug 13 07:14:24.379708 kubelet[2179]: E0813 07:14:24.379613 2179 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:14:24.379772 kubelet[2179]: I0813 07:14:24.379742 2179 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:14:24.380806 kubelet[2179]: I0813 07:14:24.380410 2179 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 07:14:24.380806 kubelet[2179]: W0813 07:14:24.380536 2179 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 07:14:24.389626 kubelet[2179]: I0813 07:14:24.388818 2179 server.go:1274] "Started kubelet"
Aug 13 07:14:24.389626 kubelet[2179]: I0813 07:14:24.388914 2179 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:14:24.389626 kubelet[2179]: I0813 07:14:24.389391 2179 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:14:24.390680 kubelet[2179]: I0813 07:14:24.390639 2179 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:14:24.401496 kubelet[2179]: I0813 07:14:24.399983 2179 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:14:24.401496 kubelet[2179]: E0813 07:14:24.398439 2179 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal.185b42327c15c9af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,UID:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 07:14:24.388540847 +0000 UTC m=+0.820991017,LastTimestamp:2025-08-13 07:14:24.388540847 +0000 UTC m=+0.820991017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,}"
Aug 13 07:14:24.401496 kubelet[2179]: I0813 07:14:24.401132 2179 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:14:24.402359 kubelet[2179]: I0813 07:14:24.402334 2179 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 07:14:24.406102 kubelet[2179]: E0813 07:14:24.405287 2179 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" not found"
Aug 13 07:14:24.406102 kubelet[2179]: I0813 07:14:24.405352 2179 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 07:14:24.406102 kubelet[2179]: I0813 07:14:24.405608 2179 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 07:14:24.406102 kubelet[2179]: I0813 07:14:24.405673 2179 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:14:24.406352 kubelet[2179]: W0813 07:14:24.406096 2179 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused
Aug 13 07:14:24.406352 kubelet[2179]: E0813 07:14:24.406158 2179 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:14:24.406906 kubelet[2179]: E0813 07:14:24.406449 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.36:6443: connect: connection refused" interval="200ms"
Aug 13 07:14:24.407503 kubelet[2179]: I0813 07:14:24.407450 2179 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:14:24.409720 kubelet[2179]: E0813 07:14:24.409689 2179 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 07:14:24.409896 kubelet[2179]: I0813 07:14:24.409871 2179 factory.go:221] Registration of the containerd container factory successfully
Aug 13 07:14:24.409896 kubelet[2179]: I0813 07:14:24.409896 2179 factory.go:221] Registration of the systemd container factory successfully
Aug 13 07:14:24.424401 kubelet[2179]: I0813 07:14:24.424353 2179 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:14:24.426575 kubelet[2179]: I0813 07:14:24.426323 2179 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:14:24.426575 kubelet[2179]: I0813 07:14:24.426352 2179 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 07:14:24.426575 kubelet[2179]: I0813 07:14:24.426381 2179 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 07:14:24.426575 kubelet[2179]: E0813 07:14:24.426438 2179 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 07:14:24.440843 kubelet[2179]: W0813 07:14:24.440727 2179 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused
Aug 13 07:14:24.440843 kubelet[2179]: E0813 07:14:24.440805 2179 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:14:24.448127 kubelet[2179]: I0813 07:14:24.448098 2179 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 07:14:24.448287 kubelet[2179]: I0813 07:14:24.448124 2179 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 07:14:24.448287 kubelet[2179]: I0813 07:14:24.448171 2179 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:14:24.451002 kubelet[2179]: I0813 07:14:24.450955 2179 policy_none.go:49] "None policy: Start"
Aug 13 07:14:24.451704 kubelet[2179]: I0813 07:14:24.451679 2179 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 07:14:24.451807 kubelet[2179]: I0813 07:14:24.451713 2179 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 07:14:24.464000 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 13 07:14:24.475577 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 13 07:14:24.480010 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 13 07:14:24.488481 kubelet[2179]: I0813 07:14:24.488421 2179 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 07:14:24.488703 kubelet[2179]: I0813 07:14:24.488670 2179 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 07:14:24.488771 kubelet[2179]: I0813 07:14:24.488691 2179 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 07:14:24.489622 kubelet[2179]: I0813 07:14:24.489376 2179 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 07:14:24.492403 kubelet[2179]: E0813 07:14:24.492365 2179 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" not found"
Aug 13 07:14:24.548079 systemd[1]: Created slice kubepods-burstable-pod08b9e46922e4af6430f025c46f32d2ee.slice - libcontainer container kubepods-burstable-pod08b9e46922e4af6430f025c46f32d2ee.slice.
Aug 13 07:14:24.566161 systemd[1]: Created slice kubepods-burstable-pod99fdb9b39e1ccf2325cb7e5a30458a35.slice - libcontainer container kubepods-burstable-pod99fdb9b39e1ccf2325cb7e5a30458a35.slice.
Aug 13 07:14:24.573902 systemd[1]: Created slice kubepods-burstable-pod2fc01ff5dcb1422347fa25e055ccc6bd.slice - libcontainer container kubepods-burstable-pod2fc01ff5dcb1422347fa25e055ccc6bd.slice.
Aug 13 07:14:24.593508 kubelet[2179]: I0813 07:14:24.593456 2179 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.593874 kubelet[2179]: E0813 07:14:24.593838 2179 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.36:6443/api/v1/nodes\": dial tcp 10.128.0.36:6443: connect: connection refused" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.608079 kubelet[2179]: E0813 07:14:24.608010 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.36:6443: connect: connection refused" interval="400ms" Aug 13 07:14:24.707682 kubelet[2179]: I0813 07:14:24.707602 2179 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.707682 kubelet[2179]: I0813 07:14:24.707672 2179 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.708284 kubelet[2179]: I0813 07:14:24.707704 2179 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.708284 kubelet[2179]: I0813 07:14:24.707737 2179 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fc01ff5dcb1422347fa25e055ccc6bd-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"2fc01ff5dcb1422347fa25e055ccc6bd\") " pod="kube-system/kube-scheduler-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.708284 kubelet[2179]: I0813 07:14:24.707763 2179 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08b9e46922e4af6430f025c46f32d2ee-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"08b9e46922e4af6430f025c46f32d2ee\") " pod="kube-system/kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.708284 kubelet[2179]: I0813 07:14:24.707791 2179 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.708412 kubelet[2179]: I0813 07:14:24.707819 2179 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.708412 kubelet[2179]: I0813 07:14:24.707849 2179 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08b9e46922e4af6430f025c46f32d2ee-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"08b9e46922e4af6430f025c46f32d2ee\") " pod="kube-system/kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.708412 kubelet[2179]: I0813 07:14:24.707886 2179 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08b9e46922e4af6430f025c46f32d2ee-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"08b9e46922e4af6430f025c46f32d2ee\") " pod="kube-system/kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.800673 kubelet[2179]: I0813 07:14:24.800625 2179 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.801083 kubelet[2179]: E0813 07:14:24.801032 2179 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.36:6443/api/v1/nodes\": dial tcp 10.128.0.36:6443: connect: connection refused" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:24.861874 containerd[1463]: time="2025-08-13T07:14:24.861718146Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,Uid:08b9e46922e4af6430f025c46f32d2ee,Namespace:kube-system,Attempt:0,}" Aug 13 07:14:24.871722 containerd[1463]: time="2025-08-13T07:14:24.871624269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,Uid:99fdb9b39e1ccf2325cb7e5a30458a35,Namespace:kube-system,Attempt:0,}" Aug 13 07:14:24.885573 containerd[1463]: time="2025-08-13T07:14:24.885507403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,Uid:2fc01ff5dcb1422347fa25e055ccc6bd,Namespace:kube-system,Attempt:0,}" Aug 13 07:14:25.008807 kubelet[2179]: E0813 07:14:25.008742 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.36:6443: connect: connection refused" interval="800ms" Aug 13 07:14:25.207260 kubelet[2179]: I0813 07:14:25.207100 2179 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:25.207623 kubelet[2179]: E0813 07:14:25.207579 2179 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.36:6443/api/v1/nodes\": dial tcp 10.128.0.36:6443: connect: connection refused" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:25.234631 kubelet[2179]: W0813 07:14:25.234543 2179 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused Aug 13 07:14:25.234807 
kubelet[2179]: E0813 07:14:25.234635 2179 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:14:25.296986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316206171.mount: Deactivated successfully. Aug 13 07:14:25.304871 containerd[1463]: time="2025-08-13T07:14:25.304818855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:14:25.306184 containerd[1463]: time="2025-08-13T07:14:25.306114239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Aug 13 07:14:25.307818 containerd[1463]: time="2025-08-13T07:14:25.307776249Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:14:25.309366 containerd[1463]: time="2025-08-13T07:14:25.309294855Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:14:25.310400 containerd[1463]: time="2025-08-13T07:14:25.310352459Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:14:25.311674 containerd[1463]: time="2025-08-13T07:14:25.311628605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:14:25.312174 
containerd[1463]: time="2025-08-13T07:14:25.312087690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:14:25.314152 containerd[1463]: time="2025-08-13T07:14:25.314040972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:14:25.318002 containerd[1463]: time="2025-08-13T07:14:25.316577942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 444.855479ms" Aug 13 07:14:25.320017 containerd[1463]: time="2025-08-13T07:14:25.319711544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 457.88283ms" Aug 13 07:14:25.324812 containerd[1463]: time="2025-08-13T07:14:25.324765767Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 439.136913ms" Aug 13 07:14:25.397072 kubelet[2179]: W0813 07:14:25.396795 2179 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.128.0.36:6443: connect: connection refused Aug 13 07:14:25.397072 kubelet[2179]: E0813 07:14:25.396890 2179 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:14:25.474142 kubelet[2179]: W0813 07:14:25.474022 2179 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused Aug 13 07:14:25.474725 kubelet[2179]: E0813 07:14:25.474288 2179 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:14:25.522137 containerd[1463]: time="2025-08-13T07:14:25.521798290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:14:25.522137 containerd[1463]: time="2025-08-13T07:14:25.521871810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:14:25.522137 containerd[1463]: time="2025-08-13T07:14:25.521900603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:14:25.522137 containerd[1463]: time="2025-08-13T07:14:25.522011648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:14:25.530647 containerd[1463]: time="2025-08-13T07:14:25.529983239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:14:25.530647 containerd[1463]: time="2025-08-13T07:14:25.530076624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:14:25.530647 containerd[1463]: time="2025-08-13T07:14:25.530133620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:14:25.530647 containerd[1463]: time="2025-08-13T07:14:25.530301571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:14:25.535957 containerd[1463]: time="2025-08-13T07:14:25.529810265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:14:25.535957 containerd[1463]: time="2025-08-13T07:14:25.535680014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:14:25.535957 containerd[1463]: time="2025-08-13T07:14:25.535704275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:14:25.537560 containerd[1463]: time="2025-08-13T07:14:25.535879477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:14:25.570755 systemd[1]: Started cri-containerd-eb4713f31b88d4c80f352b39d96ae3310f0dadc3803f57e1b390347706eaeb8b.scope - libcontainer container eb4713f31b88d4c80f352b39d96ae3310f0dadc3803f57e1b390347706eaeb8b. 
Aug 13 07:14:25.584238 systemd[1]: Started cri-containerd-a739b6954ce0a1c4f0d5ec0b43c431eb853b1e6cc40af185205e2814f769b472.scope - libcontainer container a739b6954ce0a1c4f0d5ec0b43c431eb853b1e6cc40af185205e2814f769b472. Aug 13 07:14:25.591932 systemd[1]: Started cri-containerd-e1281053ad88848595ed4c8efd90e49634de16b5582f11cf6b4c77074a1df0a8.scope - libcontainer container e1281053ad88848595ed4c8efd90e49634de16b5582f11cf6b4c77074a1df0a8. Aug 13 07:14:25.673107 containerd[1463]: time="2025-08-13T07:14:25.673044835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,Uid:2fc01ff5dcb1422347fa25e055ccc6bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb4713f31b88d4c80f352b39d96ae3310f0dadc3803f57e1b390347706eaeb8b\"" Aug 13 07:14:25.676632 kubelet[2179]: E0813 07:14:25.676584 2179 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-21291" Aug 13 07:14:25.679940 containerd[1463]: time="2025-08-13T07:14:25.679892154Z" level=info msg="CreateContainer within sandbox \"eb4713f31b88d4c80f352b39d96ae3310f0dadc3803f57e1b390347706eaeb8b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:14:25.709385 containerd[1463]: time="2025-08-13T07:14:25.709325972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,Uid:08b9e46922e4af6430f025c46f32d2ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"a739b6954ce0a1c4f0d5ec0b43c431eb853b1e6cc40af185205e2814f769b472\"" Aug 13 07:14:25.711699 kubelet[2179]: E0813 07:14:25.711477 2179 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" hostnameMaxLen=63 
truncatedHostname="kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-21291" Aug 13 07:14:25.713061 containerd[1463]: time="2025-08-13T07:14:25.712802794Z" level=info msg="CreateContainer within sandbox \"eb4713f31b88d4c80f352b39d96ae3310f0dadc3803f57e1b390347706eaeb8b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e35f06198707396d207346239edec02f0d27b05440633f4e87d7156b758b4aad\"" Aug 13 07:14:25.715512 containerd[1463]: time="2025-08-13T07:14:25.714415437Z" level=info msg="CreateContainer within sandbox \"a739b6954ce0a1c4f0d5ec0b43c431eb853b1e6cc40af185205e2814f769b472\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:14:25.715693 containerd[1463]: time="2025-08-13T07:14:25.715634172Z" level=info msg="StartContainer for \"e35f06198707396d207346239edec02f0d27b05440633f4e87d7156b758b4aad\"" Aug 13 07:14:25.728211 containerd[1463]: time="2025-08-13T07:14:25.728067472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,Uid:99fdb9b39e1ccf2325cb7e5a30458a35,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1281053ad88848595ed4c8efd90e49634de16b5582f11cf6b4c77074a1df0a8\"" Aug 13 07:14:25.731655 kubelet[2179]: E0813 07:14:25.731610 2179 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flat" Aug 13 07:14:25.734302 containerd[1463]: time="2025-08-13T07:14:25.734263846Z" level=info msg="CreateContainer within sandbox \"e1281053ad88848595ed4c8efd90e49634de16b5582f11cf6b4c77074a1df0a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:14:25.745046 containerd[1463]: time="2025-08-13T07:14:25.744994472Z" level=info msg="CreateContainer within sandbox 
\"a739b6954ce0a1c4f0d5ec0b43c431eb853b1e6cc40af185205e2814f769b472\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"42db9898ae7065002f13a4bf9763a0c4fa838a47d6464ba0a6242e70b1db8f8f\"" Aug 13 07:14:25.746567 containerd[1463]: time="2025-08-13T07:14:25.746527339Z" level=info msg="StartContainer for \"42db9898ae7065002f13a4bf9763a0c4fa838a47d6464ba0a6242e70b1db8f8f\"" Aug 13 07:14:25.761842 containerd[1463]: time="2025-08-13T07:14:25.761799178Z" level=info msg="CreateContainer within sandbox \"e1281053ad88848595ed4c8efd90e49634de16b5582f11cf6b4c77074a1df0a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75d631979650d055ffd3bdc3d7e9c48ee7c40913c27205b7d76e9820b627423b\"" Aug 13 07:14:25.762572 containerd[1463]: time="2025-08-13T07:14:25.762530021Z" level=info msg="StartContainer for \"75d631979650d055ffd3bdc3d7e9c48ee7c40913c27205b7d76e9820b627423b\"" Aug 13 07:14:25.768114 systemd[1]: Started cri-containerd-e35f06198707396d207346239edec02f0d27b05440633f4e87d7156b758b4aad.scope - libcontainer container e35f06198707396d207346239edec02f0d27b05440633f4e87d7156b758b4aad. 
Aug 13 07:14:25.800791 kubelet[2179]: W0813 07:14:25.800699 2179 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused Aug 13 07:14:25.800984 kubelet[2179]: E0813 07:14:25.800809 2179 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:14:25.811496 kubelet[2179]: E0813 07:14:25.810075 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.36:6443: connect: connection refused" interval="1.6s" Aug 13 07:14:25.828688 systemd[1]: Started cri-containerd-42db9898ae7065002f13a4bf9763a0c4fa838a47d6464ba0a6242e70b1db8f8f.scope - libcontainer container 42db9898ae7065002f13a4bf9763a0c4fa838a47d6464ba0a6242e70b1db8f8f. Aug 13 07:14:25.831200 systemd[1]: Started cri-containerd-75d631979650d055ffd3bdc3d7e9c48ee7c40913c27205b7d76e9820b627423b.scope - libcontainer container 75d631979650d055ffd3bdc3d7e9c48ee7c40913c27205b7d76e9820b627423b. 
Aug 13 07:14:25.882925 containerd[1463]: time="2025-08-13T07:14:25.882833989Z" level=info msg="StartContainer for \"e35f06198707396d207346239edec02f0d27b05440633f4e87d7156b758b4aad\" returns successfully" Aug 13 07:14:25.947281 containerd[1463]: time="2025-08-13T07:14:25.947051899Z" level=info msg="StartContainer for \"42db9898ae7065002f13a4bf9763a0c4fa838a47d6464ba0a6242e70b1db8f8f\" returns successfully" Aug 13 07:14:25.978685 containerd[1463]: time="2025-08-13T07:14:25.978457494Z" level=info msg="StartContainer for \"75d631979650d055ffd3bdc3d7e9c48ee7c40913c27205b7d76e9820b627423b\" returns successfully" Aug 13 07:14:26.014529 kubelet[2179]: I0813 07:14:26.014111 2179 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:26.015133 kubelet[2179]: E0813 07:14:26.015066 2179 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.36:6443/api/v1/nodes\": dial tcp 10.128.0.36:6443: connect: connection refused" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:27.402248 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 13 07:14:27.624518 kubelet[2179]: I0813 07:14:27.622273 2179 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:29.592902 kubelet[2179]: I0813 07:14:29.592852 2179 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:14:29.633595 kubelet[2179]: E0813 07:14:29.633252 2179 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal.185b42327c15c9af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,UID:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 07:14:24.388540847 +0000 UTC m=+0.820991017,LastTimestamp:2025-08-13 07:14:24.388540847 +0000 UTC m=+0.820991017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal,}" Aug 13 07:14:30.383127 kubelet[2179]: I0813 07:14:30.382329 2179 apiserver.go:52] "Watching apiserver" Aug 13 07:14:30.406565 kubelet[2179]: I0813 07:14:30.406423 2179 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:14:30.578384 kubelet[2179]: W0813 07:14:30.577942 2179 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 07:14:31.533506 systemd[1]: Reloading requested from client PID 
2456 ('systemctl') (unit session-7.scope)... Aug 13 07:14:31.533528 systemd[1]: Reloading... Aug 13 07:14:31.652511 zram_generator::config[2492]: No configuration found. Aug 13 07:14:31.817055 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:14:31.940106 systemd[1]: Reloading finished in 405 ms. Aug 13 07:14:31.991109 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:14:32.000074 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:14:32.000374 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:14:32.000459 systemd[1]: kubelet.service: Consumed 1.344s CPU time, 131.2M memory peak, 0B memory swap peak. Aug 13 07:14:32.007960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:14:32.288538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:14:32.303133 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:14:32.375030 kubelet[2544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:14:32.375030 kubelet[2544]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:14:32.375030 kubelet[2544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:14:32.375747 kubelet[2544]: I0813 07:14:32.375016 2544 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:14:32.386499 kubelet[2544]: I0813 07:14:32.384872 2544 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 07:14:32.386499 kubelet[2544]: I0813 07:14:32.384913 2544 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:14:32.386499 kubelet[2544]: I0813 07:14:32.385525 2544 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 07:14:32.389403 kubelet[2544]: I0813 07:14:32.389220 2544 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 07:14:32.391940 kubelet[2544]: I0813 07:14:32.391905 2544 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:14:32.396453 kubelet[2544]: E0813 07:14:32.396419 2544 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:14:32.396604 kubelet[2544]: I0813 07:14:32.396586 2544 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:14:32.400390 kubelet[2544]: I0813 07:14:32.400359 2544 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:14:32.400588 kubelet[2544]: I0813 07:14:32.400565 2544 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 07:14:32.400857 kubelet[2544]: I0813 07:14:32.400794 2544 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:14:32.401097 kubelet[2544]: I0813 07:14:32.400842 2544 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 07:14:32.401257 kubelet[2544]: I0813 07:14:32.401098 2544 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:14:32.401257 kubelet[2544]: I0813 07:14:32.401118 2544 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 07:14:32.401257 kubelet[2544]: I0813 07:14:32.401157 2544 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:14:32.401427 kubelet[2544]: I0813 07:14:32.401306 2544 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 07:14:32.401427 kubelet[2544]: I0813 07:14:32.401325 2544 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:14:32.401427 kubelet[2544]: I0813 07:14:32.401366 2544 kubelet.go:314] "Adding apiserver pod source"
Aug 13 07:14:32.401427 kubelet[2544]: I0813 07:14:32.401382 2544 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:14:32.402333 kubelet[2544]: I0813 07:14:32.402040 2544 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:14:32.403007 kubelet[2544]: I0813 07:14:32.402739 2544 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 07:14:32.403449 kubelet[2544]: I0813 07:14:32.403424 2544 server.go:1274] "Started kubelet"
Aug 13 07:14:32.406166 kubelet[2544]: I0813 07:14:32.406138 2544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:14:32.411493 kubelet[2544]: I0813 07:14:32.410740 2544 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:14:32.417489 kubelet[2544]: I0813 07:14:32.415312 2544 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 07:14:32.419418 kubelet[2544]: I0813 07:14:32.419360 2544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:14:32.420727 kubelet[2544]: I0813 07:14:32.420691 2544 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:14:32.423654 kubelet[2544]: I0813 07:14:32.422772 2544 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:14:32.430434 kubelet[2544]: I0813 07:14:32.427862 2544 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 07:14:32.430434 kubelet[2544]: E0813 07:14:32.429619 2544 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" not found"
Aug 13 07:14:32.430434 kubelet[2544]: I0813 07:14:32.430327 2544 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 07:14:32.430842 kubelet[2544]: I0813 07:14:32.430822 2544 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:14:32.446520 kubelet[2544]: I0813 07:14:32.445919 2544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:14:32.450590 kubelet[2544]: I0813 07:14:32.450548 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:14:32.454908 kubelet[2544]: I0813 07:14:32.454870 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:14:32.454908 kubelet[2544]: I0813 07:14:32.454911 2544 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 07:14:32.455108 kubelet[2544]: I0813 07:14:32.454935 2544 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 07:14:32.455108 kubelet[2544]: E0813 07:14:32.454997 2544 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 07:14:32.468274 kubelet[2544]: I0813 07:14:32.468235 2544 factory.go:221] Registration of the containerd container factory successfully
Aug 13 07:14:32.468274 kubelet[2544]: I0813 07:14:32.468270 2544 factory.go:221] Registration of the systemd container factory successfully
Aug 13 07:14:32.538654 kubelet[2544]: I0813 07:14:32.538621 2544 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 07:14:32.538837 kubelet[2544]: I0813 07:14:32.538821 2544 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 07:14:32.538952 kubelet[2544]: I0813 07:14:32.538941 2544 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:14:32.539314 kubelet[2544]: I0813 07:14:32.539292 2544 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 07:14:32.541264 kubelet[2544]: I0813 07:14:32.541195 2544 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 07:14:32.541433 kubelet[2544]: I0813 07:14:32.541290 2544 policy_none.go:49] "None policy: Start"
Aug 13 07:14:32.542298 kubelet[2544]: I0813 07:14:32.542277 2544 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 07:14:32.542442 kubelet[2544]: I0813 07:14:32.542428 2544 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 07:14:32.542780 kubelet[2544]: I0813 07:14:32.542735 2544 state_mem.go:75] "Updated machine memory state"
Aug 13 07:14:32.551232 kubelet[2544]: I0813 07:14:32.551066 2544 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 07:14:32.551329 kubelet[2544]: I0813 07:14:32.551290 2544 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 07:14:32.551394 kubelet[2544]: I0813 07:14:32.551307 2544 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 07:14:32.552066 kubelet[2544]: I0813 07:14:32.551892 2544 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 07:14:32.567016 kubelet[2544]: W0813 07:14:32.566681 2544 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Aug 13 07:14:32.571372 kubelet[2544]: W0813 07:14:32.571335 2544 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Aug 13 07:14:32.576291 kubelet[2544]: W0813 07:14:32.576265 2544 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Aug 13 07:14:32.576551 kubelet[2544]: E0813 07:14:32.576526 2544 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.631737 kubelet[2544]: I0813 07:14:32.631676 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08b9e46922e4af6430f025c46f32d2ee-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"08b9e46922e4af6430f025c46f32d2ee\") " pod="kube-system/kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.631737 kubelet[2544]: I0813 07:14:32.631734 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.631996 kubelet[2544]: I0813 07:14:32.631772 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fc01ff5dcb1422347fa25e055ccc6bd-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"2fc01ff5dcb1422347fa25e055ccc6bd\") " pod="kube-system/kube-scheduler-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.631996 kubelet[2544]: I0813 07:14:32.631811 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08b9e46922e4af6430f025c46f32d2ee-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"08b9e46922e4af6430f025c46f32d2ee\") " pod="kube-system/kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.631996 kubelet[2544]: I0813 07:14:32.631842 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08b9e46922e4af6430f025c46f32d2ee-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"08b9e46922e4af6430f025c46f32d2ee\") " pod="kube-system/kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.631996 kubelet[2544]: I0813 07:14:32.631869 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.632228 kubelet[2544]: I0813 07:14:32.631894 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.632228 kubelet[2544]: I0813 07:14:32.631920 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.632228 kubelet[2544]: I0813 07:14:32.631950 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99fdb9b39e1ccf2325cb7e5a30458a35-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" (UID: \"99fdb9b39e1ccf2325cb7e5a30458a35\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.668807 kubelet[2544]: I0813 07:14:32.668761 2544 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.680667 kubelet[2544]: I0813 07:14:32.679816 2544 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:32.680667 kubelet[2544]: I0813 07:14:32.679988 2544 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal"
Aug 13 07:14:33.414014 kubelet[2544]: I0813 07:14:33.412574 2544 apiserver.go:52] "Watching apiserver"
Aug 13 07:14:33.431289 kubelet[2544]: I0813 07:14:33.431193 2544 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 07:14:33.554144 kubelet[2544]: I0813 07:14:33.554029 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" podStartSLOduration=1.553979928 podStartE2EDuration="1.553979928s" podCreationTimestamp="2025-08-13 07:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:14:33.552367259 +0000 UTC m=+1.242988488" watchObservedRunningTime="2025-08-13 07:14:33.553979928 +0000 UTC m=+1.244601158"
Aug 13 07:14:33.554369 kubelet[2544]: I0813 07:14:33.554278 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" podStartSLOduration=1.5542463560000002 podStartE2EDuration="1.554246356s" podCreationTimestamp="2025-08-13 07:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:14:33.5394267 +0000 UTC m=+1.230047927" watchObservedRunningTime="2025-08-13 07:14:33.554246356 +0000 UTC m=+1.244867578"
Aug 13 07:14:36.439871 kubelet[2544]: I0813 07:14:36.439776 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" podStartSLOduration=6.4397539120000005 podStartE2EDuration="6.439753912s" podCreationTimestamp="2025-08-13 07:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:14:33.569555625 +0000 UTC m=+1.260176866" watchObservedRunningTime="2025-08-13 07:14:36.439753912 +0000 UTC m=+4.130375138"
Aug 13 07:14:37.602173 kubelet[2544]: I0813 07:14:37.602107 2544 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 07:14:37.602934 containerd[1463]: time="2025-08-13T07:14:37.602626572Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 07:14:37.603347 kubelet[2544]: I0813 07:14:37.602931 2544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 07:14:38.299312 systemd[1]: Created slice kubepods-besteffort-podca31da64_ece4_4a7b_b7ed_3f8700732784.slice - libcontainer container kubepods-besteffort-podca31da64_ece4_4a7b_b7ed_3f8700732784.slice.
Aug 13 07:14:38.370298 kubelet[2544]: I0813 07:14:38.370085 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca31da64-ece4-4a7b-b7ed-3f8700732784-kube-proxy\") pod \"kube-proxy-bzw5v\" (UID: \"ca31da64-ece4-4a7b-b7ed-3f8700732784\") " pod="kube-system/kube-proxy-bzw5v"
Aug 13 07:14:38.370298 kubelet[2544]: I0813 07:14:38.370136 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca31da64-ece4-4a7b-b7ed-3f8700732784-xtables-lock\") pod \"kube-proxy-bzw5v\" (UID: \"ca31da64-ece4-4a7b-b7ed-3f8700732784\") " pod="kube-system/kube-proxy-bzw5v"
Aug 13 07:14:38.370298 kubelet[2544]: I0813 07:14:38.370190 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca31da64-ece4-4a7b-b7ed-3f8700732784-lib-modules\") pod \"kube-proxy-bzw5v\" (UID: \"ca31da64-ece4-4a7b-b7ed-3f8700732784\") " pod="kube-system/kube-proxy-bzw5v"
Aug 13 07:14:38.370298 kubelet[2544]: I0813 07:14:38.370219 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nrgb\" (UniqueName: \"kubernetes.io/projected/ca31da64-ece4-4a7b-b7ed-3f8700732784-kube-api-access-5nrgb\") pod \"kube-proxy-bzw5v\" (UID: \"ca31da64-ece4-4a7b-b7ed-3f8700732784\") " pod="kube-system/kube-proxy-bzw5v"
Aug 13 07:14:38.611307 containerd[1463]: time="2025-08-13T07:14:38.611165547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzw5v,Uid:ca31da64-ece4-4a7b-b7ed-3f8700732784,Namespace:kube-system,Attempt:0,}"
Aug 13 07:14:38.645522 containerd[1463]: time="2025-08-13T07:14:38.645360932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:14:38.646591 containerd[1463]: time="2025-08-13T07:14:38.646364380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:14:38.646591 containerd[1463]: time="2025-08-13T07:14:38.646398660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:14:38.646591 containerd[1463]: time="2025-08-13T07:14:38.646535361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:14:38.683373 systemd[1]: run-containerd-runc-k8s.io-2106a31e05c73ff33058ca43584a5a69ac2489df73c63441cbb34c5d6e8489ad-runc.CmDQSN.mount: Deactivated successfully.
Aug 13 07:14:38.701702 systemd[1]: Started cri-containerd-2106a31e05c73ff33058ca43584a5a69ac2489df73c63441cbb34c5d6e8489ad.scope - libcontainer container 2106a31e05c73ff33058ca43584a5a69ac2489df73c63441cbb34c5d6e8489ad.
Aug 13 07:14:38.773549 kubelet[2544]: I0813 07:14:38.773363 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8hf\" (UniqueName: \"kubernetes.io/projected/10c1336d-eb8c-4f71-9fbf-efa98dac9a1d-kube-api-access-qd8hf\") pod \"tigera-operator-5bf8dfcb4-gv2c2\" (UID: \"10c1336d-eb8c-4f71-9fbf-efa98dac9a1d\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-gv2c2"
Aug 13 07:14:38.773549 kubelet[2544]: I0813 07:14:38.773458 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10c1336d-eb8c-4f71-9fbf-efa98dac9a1d-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-gv2c2\" (UID: \"10c1336d-eb8c-4f71-9fbf-efa98dac9a1d\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-gv2c2"
Aug 13 07:14:38.781933 containerd[1463]: time="2025-08-13T07:14:38.780119030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzw5v,Uid:ca31da64-ece4-4a7b-b7ed-3f8700732784,Namespace:kube-system,Attempt:0,} returns sandbox id \"2106a31e05c73ff33058ca43584a5a69ac2489df73c63441cbb34c5d6e8489ad\""
Aug 13 07:14:38.784107 systemd[1]: Created slice kubepods-besteffort-pod10c1336d_eb8c_4f71_9fbf_efa98dac9a1d.slice - libcontainer container kubepods-besteffort-pod10c1336d_eb8c_4f71_9fbf_efa98dac9a1d.slice.
Aug 13 07:14:38.789886 containerd[1463]: time="2025-08-13T07:14:38.789841825Z" level=info msg="CreateContainer within sandbox \"2106a31e05c73ff33058ca43584a5a69ac2489df73c63441cbb34c5d6e8489ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 07:14:38.809053 containerd[1463]: time="2025-08-13T07:14:38.809003717Z" level=info msg="CreateContainer within sandbox \"2106a31e05c73ff33058ca43584a5a69ac2489df73c63441cbb34c5d6e8489ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b8475adeac09d26ec16df7e66f036188278e7f38b6f5d85760e463a43101f55\""
Aug 13 07:14:38.810743 containerd[1463]: time="2025-08-13T07:14:38.809784278Z" level=info msg="StartContainer for \"4b8475adeac09d26ec16df7e66f036188278e7f38b6f5d85760e463a43101f55\""
Aug 13 07:14:38.847685 systemd[1]: Started cri-containerd-4b8475adeac09d26ec16df7e66f036188278e7f38b6f5d85760e463a43101f55.scope - libcontainer container 4b8475adeac09d26ec16df7e66f036188278e7f38b6f5d85760e463a43101f55.
Aug 13 07:14:38.896122 containerd[1463]: time="2025-08-13T07:14:38.895855148Z" level=info msg="StartContainer for \"4b8475adeac09d26ec16df7e66f036188278e7f38b6f5d85760e463a43101f55\" returns successfully"
Aug 13 07:14:39.092253 containerd[1463]: time="2025-08-13T07:14:39.092179673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-gv2c2,Uid:10c1336d-eb8c-4f71-9fbf-efa98dac9a1d,Namespace:tigera-operator,Attempt:0,}"
Aug 13 07:14:39.129735 containerd[1463]: time="2025-08-13T07:14:39.129372904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:14:39.129735 containerd[1463]: time="2025-08-13T07:14:39.129452571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:14:39.129735 containerd[1463]: time="2025-08-13T07:14:39.129501062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:14:39.129735 containerd[1463]: time="2025-08-13T07:14:39.129640929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:14:39.158961 systemd[1]: Started cri-containerd-5eafd45dd49f3b9c4e15059b4ff8838f44e055b9df004cd3551889c11aee8cc6.scope - libcontainer container 5eafd45dd49f3b9c4e15059b4ff8838f44e055b9df004cd3551889c11aee8cc6.
Aug 13 07:14:39.239402 containerd[1463]: time="2025-08-13T07:14:39.239346628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-gv2c2,Uid:10c1336d-eb8c-4f71-9fbf-efa98dac9a1d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5eafd45dd49f3b9c4e15059b4ff8838f44e055b9df004cd3551889c11aee8cc6\""
Aug 13 07:14:39.242813 containerd[1463]: time="2025-08-13T07:14:39.242760740Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 07:14:40.075018 kubelet[2544]: I0813 07:14:40.074940 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzw5v" podStartSLOduration=2.074915373 podStartE2EDuration="2.074915373s" podCreationTimestamp="2025-08-13 07:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:14:39.527967456 +0000 UTC m=+7.218588684" watchObservedRunningTime="2025-08-13 07:14:40.074915373 +0000 UTC m=+7.765536602"
Aug 13 07:14:40.217426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745299991.mount: Deactivated successfully.
Aug 13 07:14:41.090315 containerd[1463]: time="2025-08-13T07:14:41.090202053Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:41.091867 containerd[1463]: time="2025-08-13T07:14:41.091798630Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Aug 13 07:14:41.093253 containerd[1463]: time="2025-08-13T07:14:41.093188963Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:41.096348 containerd[1463]: time="2025-08-13T07:14:41.096270876Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:14:41.097407 containerd[1463]: time="2025-08-13T07:14:41.097234263Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.854419786s"
Aug 13 07:14:41.097407 containerd[1463]: time="2025-08-13T07:14:41.097281951Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Aug 13 07:14:41.100485 containerd[1463]: time="2025-08-13T07:14:41.100201559Z" level=info msg="CreateContainer within sandbox \"5eafd45dd49f3b9c4e15059b4ff8838f44e055b9df004cd3551889c11aee8cc6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 07:14:41.116791 containerd[1463]: time="2025-08-13T07:14:41.116631152Z" level=info msg="CreateContainer within sandbox \"5eafd45dd49f3b9c4e15059b4ff8838f44e055b9df004cd3551889c11aee8cc6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c7328f5e9b1deef000789601fae5129cf6353b947bf45e824ea7ab26367c77e1\""
Aug 13 07:14:41.118949 containerd[1463]: time="2025-08-13T07:14:41.118023628Z" level=info msg="StartContainer for \"c7328f5e9b1deef000789601fae5129cf6353b947bf45e824ea7ab26367c77e1\""
Aug 13 07:14:41.161685 systemd[1]: run-containerd-runc-k8s.io-c7328f5e9b1deef000789601fae5129cf6353b947bf45e824ea7ab26367c77e1-runc.ZqibE5.mount: Deactivated successfully.
Aug 13 07:14:41.170706 systemd[1]: Started cri-containerd-c7328f5e9b1deef000789601fae5129cf6353b947bf45e824ea7ab26367c77e1.scope - libcontainer container c7328f5e9b1deef000789601fae5129cf6353b947bf45e824ea7ab26367c77e1.
Aug 13 07:14:41.207574 containerd[1463]: time="2025-08-13T07:14:41.207404816Z" level=info msg="StartContainer for \"c7328f5e9b1deef000789601fae5129cf6353b947bf45e824ea7ab26367c77e1\" returns successfully"
Aug 13 07:14:41.548576 kubelet[2544]: I0813 07:14:41.548273 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-gv2c2" podStartSLOduration=1.691398919 podStartE2EDuration="3.548244454s" podCreationTimestamp="2025-08-13 07:14:38 +0000 UTC" firstStartedPulling="2025-08-13 07:14:39.241837791 +0000 UTC m=+6.932459009" lastFinishedPulling="2025-08-13 07:14:41.098683327 +0000 UTC m=+8.789304544" observedRunningTime="2025-08-13 07:14:41.547598529 +0000 UTC m=+9.238219761" watchObservedRunningTime="2025-08-13 07:14:41.548244454 +0000 UTC m=+9.238865682"
Aug 13 07:14:41.781119 update_engine[1447]: I20250813 07:14:41.780997 1447 update_attempter.cc:509] Updating boot flags...
Aug 13 07:14:41.849517 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2889)
Aug 13 07:14:41.963518 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2888)
Aug 13 07:14:48.199846 sudo[1711]: pam_unix(sudo:session): session closed for user root
Aug 13 07:14:48.246859 sshd[1708]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:48.259911 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit.
Aug 13 07:14:48.262007 systemd[1]: sshd@6-10.128.0.36:22-139.178.68.195:36334.service: Deactivated successfully.
Aug 13 07:14:48.266115 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 07:14:48.266918 systemd[1]: session-7.scope: Consumed 7.355s CPU time, 154.1M memory peak, 0B memory swap peak.
Aug 13 07:14:48.268917 systemd-logind[1440]: Removed session 7.
Aug 13 07:14:53.824169 systemd[1]: Created slice kubepods-besteffort-pod11787120_81eb_440e_a02f_bee6e7dffac6.slice - libcontainer container kubepods-besteffort-pod11787120_81eb_440e_a02f_bee6e7dffac6.slice.
Aug 13 07:14:53.875056 kubelet[2544]: I0813 07:14:53.875000 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/11787120-81eb-440e-a02f-bee6e7dffac6-typha-certs\") pod \"calico-typha-79f955bcf7-l9h5c\" (UID: \"11787120-81eb-440e-a02f-bee6e7dffac6\") " pod="calico-system/calico-typha-79f955bcf7-l9h5c"
Aug 13 07:14:53.875735 kubelet[2544]: I0813 07:14:53.875066 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzk2c\" (UniqueName: \"kubernetes.io/projected/11787120-81eb-440e-a02f-bee6e7dffac6-kube-api-access-gzk2c\") pod \"calico-typha-79f955bcf7-l9h5c\" (UID: \"11787120-81eb-440e-a02f-bee6e7dffac6\") " pod="calico-system/calico-typha-79f955bcf7-l9h5c"
Aug 13 07:14:53.875735 kubelet[2544]: I0813 07:14:53.875097 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11787120-81eb-440e-a02f-bee6e7dffac6-tigera-ca-bundle\") pod \"calico-typha-79f955bcf7-l9h5c\" (UID: \"11787120-81eb-440e-a02f-bee6e7dffac6\") " pod="calico-system/calico-typha-79f955bcf7-l9h5c"
Aug 13 07:14:54.131024 containerd[1463]: time="2025-08-13T07:14:54.130371833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79f955bcf7-l9h5c,Uid:11787120-81eb-440e-a02f-bee6e7dffac6,Namespace:calico-system,Attempt:0,}"
Aug 13 07:14:54.184592 containerd[1463]: time="2025-08-13T07:14:54.183164688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:14:54.184592 containerd[1463]: time="2025-08-13T07:14:54.184379659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:14:54.184592 containerd[1463]: time="2025-08-13T07:14:54.184404736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:14:54.185408 containerd[1463]: time="2025-08-13T07:14:54.185035775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:14:54.248717 systemd[1]: Started cri-containerd-d975976e988b2fcffe69df2cf7cc64aaf495050ef8aa1d6a3df56550d476cea6.scope - libcontainer container d975976e988b2fcffe69df2cf7cc64aaf495050ef8aa1d6a3df56550d476cea6.
Aug 13 07:14:54.267642 systemd[1]: Created slice kubepods-besteffort-podd483117c_aada_4cb0_94ca_fa2e7d705d3c.slice - libcontainer container kubepods-besteffort-podd483117c_aada_4cb0_94ca_fa2e7d705d3c.slice.
Aug 13 07:14:54.278120 kubelet[2544]: I0813 07:14:54.277536 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d483117c-aada-4cb0-94ca-fa2e7d705d3c-node-certs\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.278120 kubelet[2544]: I0813 07:14:54.277603 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d483117c-aada-4cb0-94ca-fa2e7d705d3c-cni-net-dir\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.278120 kubelet[2544]: I0813 07:14:54.277637 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d483117c-aada-4cb0-94ca-fa2e7d705d3c-var-run-calico\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.278120 kubelet[2544]: I0813 07:14:54.277663 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d483117c-aada-4cb0-94ca-fa2e7d705d3c-xtables-lock\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.278120 kubelet[2544]: I0813 07:14:54.277692 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d483117c-aada-4cb0-94ca-fa2e7d705d3c-var-lib-calico\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.278772 kubelet[2544]: I0813 07:14:54.277721 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d483117c-aada-4cb0-94ca-fa2e7d705d3c-cni-log-dir\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.278772 kubelet[2544]: I0813 07:14:54.277748 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d483117c-aada-4cb0-94ca-fa2e7d705d3c-flexvol-driver-host\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.278772 kubelet[2544]: I0813 07:14:54.277775 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d483117c-aada-4cb0-94ca-fa2e7d705d3c-lib-modules\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.278772 kubelet[2544]: I0813 07:14:54.277809 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d483117c-aada-4cb0-94ca-fa2e7d705d3c-tigera-ca-bundle\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.278772 kubelet[2544]: I0813 07:14:54.277836 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlhw\" (UniqueName: \"kubernetes.io/projected/d483117c-aada-4cb0-94ca-fa2e7d705d3c-kube-api-access-bzlhw\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.279081 kubelet[2544]: I0813 07:14:54.277869 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d483117c-aada-4cb0-94ca-fa2e7d705d3c-cni-bin-dir\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.279081 kubelet[2544]: I0813 07:14:54.277895 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d483117c-aada-4cb0-94ca-fa2e7d705d3c-policysync\") pod \"calico-node-2fglh\" (UID: \"d483117c-aada-4cb0-94ca-fa2e7d705d3c\") " pod="calico-system/calico-node-2fglh"
Aug 13 07:14:54.385749 kubelet[2544]: E0813 07:14:54.385034 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:14:54.385749 kubelet[2544]: W0813 07:14:54.385067 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Aug 13 07:14:54.385749 kubelet[2544]: E0813 07:14:54.385097 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.393711 containerd[1463]: time="2025-08-13T07:14:54.393286058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79f955bcf7-l9h5c,Uid:11787120-81eb-440e-a02f-bee6e7dffac6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d975976e988b2fcffe69df2cf7cc64aaf495050ef8aa1d6a3df56550d476cea6\"" Aug 13 07:14:54.393882 kubelet[2544]: E0813 07:14:54.393494 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.393882 kubelet[2544]: W0813 07:14:54.393519 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.393882 kubelet[2544]: E0813 07:14:54.393570 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.395553 kubelet[2544]: E0813 07:14:54.394940 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.395553 kubelet[2544]: W0813 07:14:54.395004 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.396436 kubelet[2544]: E0813 07:14:54.396345 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.398404 kubelet[2544]: E0813 07:14:54.397343 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.398404 kubelet[2544]: W0813 07:14:54.397362 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.398404 kubelet[2544]: E0813 07:14:54.397461 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.399998 kubelet[2544]: E0813 07:14:54.399954 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.399998 kubelet[2544]: W0813 07:14:54.399997 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.400409 kubelet[2544]: E0813 07:14:54.400035 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.402204 kubelet[2544]: E0813 07:14:54.401936 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.402204 kubelet[2544]: W0813 07:14:54.401961 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.402204 kubelet[2544]: E0813 07:14:54.401981 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.403517 containerd[1463]: time="2025-08-13T07:14:54.403382891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:14:54.422905 kubelet[2544]: E0813 07:14:54.422671 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.422905 kubelet[2544]: W0813 07:14:54.422786 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.422905 kubelet[2544]: E0813 07:14:54.422830 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.449188 kubelet[2544]: E0813 07:14:54.449006 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9swvw" podUID="707e0f61-36a7-4b17-96a3-b98a0613f9bd" Aug 13 07:14:54.479417 kubelet[2544]: E0813 07:14:54.478988 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.479417 kubelet[2544]: W0813 07:14:54.479024 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.479417 kubelet[2544]: E0813 07:14:54.479056 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.482644 kubelet[2544]: E0813 07:14:54.481028 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.482644 kubelet[2544]: W0813 07:14:54.481050 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.482644 kubelet[2544]: E0813 07:14:54.481070 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.483443 kubelet[2544]: E0813 07:14:54.483047 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.483443 kubelet[2544]: W0813 07:14:54.483069 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.483443 kubelet[2544]: E0813 07:14:54.483090 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.484441 kubelet[2544]: E0813 07:14:54.483860 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.484441 kubelet[2544]: W0813 07:14:54.483879 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.484441 kubelet[2544]: E0813 07:14:54.483898 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.485574 kubelet[2544]: E0813 07:14:54.484504 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.485574 kubelet[2544]: W0813 07:14:54.484520 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.485574 kubelet[2544]: E0813 07:14:54.484539 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.487058 kubelet[2544]: E0813 07:14:54.486635 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.487058 kubelet[2544]: W0813 07:14:54.486659 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.487058 kubelet[2544]: E0813 07:14:54.486676 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.487728 kubelet[2544]: E0813 07:14:54.487412 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.487728 kubelet[2544]: W0813 07:14:54.487432 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.487728 kubelet[2544]: E0813 07:14:54.487451 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.489007 kubelet[2544]: E0813 07:14:54.488579 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.489007 kubelet[2544]: W0813 07:14:54.488603 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.489007 kubelet[2544]: E0813 07:14:54.488620 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.490490 kubelet[2544]: E0813 07:14:54.489593 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.490490 kubelet[2544]: W0813 07:14:54.489617 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.490490 kubelet[2544]: E0813 07:14:54.489636 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.490708 kubelet[2544]: E0813 07:14:54.490569 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.490708 kubelet[2544]: W0813 07:14:54.490585 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.490708 kubelet[2544]: E0813 07:14:54.490602 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.492875 kubelet[2544]: E0813 07:14:54.491594 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.492875 kubelet[2544]: W0813 07:14:54.491613 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.492875 kubelet[2544]: E0813 07:14:54.491630 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.492875 kubelet[2544]: E0813 07:14:54.492619 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.492875 kubelet[2544]: W0813 07:14:54.492639 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.492875 kubelet[2544]: E0813 07:14:54.492683 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.496441 kubelet[2544]: E0813 07:14:54.495034 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.496441 kubelet[2544]: W0813 07:14:54.495054 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.496441 kubelet[2544]: E0813 07:14:54.495071 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.496941 kubelet[2544]: E0813 07:14:54.496924 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.497124 kubelet[2544]: W0813 07:14:54.497038 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.497124 kubelet[2544]: E0813 07:14:54.497060 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.497644 kubelet[2544]: E0813 07:14:54.497507 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.497644 kubelet[2544]: W0813 07:14:54.497524 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.497644 kubelet[2544]: E0813 07:14:54.497540 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.499819 kubelet[2544]: E0813 07:14:54.499429 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.499819 kubelet[2544]: W0813 07:14:54.499449 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.499819 kubelet[2544]: E0813 07:14:54.499500 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.500121 kubelet[2544]: E0813 07:14:54.500090 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.500309 kubelet[2544]: W0813 07:14:54.500219 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.500309 kubelet[2544]: E0813 07:14:54.500245 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.500913 kubelet[2544]: E0813 07:14:54.500767 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.500913 kubelet[2544]: W0813 07:14:54.500786 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.500913 kubelet[2544]: E0813 07:14:54.500802 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.501507 kubelet[2544]: E0813 07:14:54.501313 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.501507 kubelet[2544]: W0813 07:14:54.501340 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.501507 kubelet[2544]: E0813 07:14:54.501357 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.502166 kubelet[2544]: E0813 07:14:54.502148 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.502315 kubelet[2544]: W0813 07:14:54.502255 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.502315 kubelet[2544]: E0813 07:14:54.502278 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.503007 kubelet[2544]: E0813 07:14:54.502991 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.503125 kubelet[2544]: W0813 07:14:54.503108 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.503383 kubelet[2544]: E0813 07:14:54.503198 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.503383 kubelet[2544]: I0813 07:14:54.503240 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdr6\" (UniqueName: \"kubernetes.io/projected/707e0f61-36a7-4b17-96a3-b98a0613f9bd-kube-api-access-ccdr6\") pod \"csi-node-driver-9swvw\" (UID: \"707e0f61-36a7-4b17-96a3-b98a0613f9bd\") " pod="calico-system/csi-node-driver-9swvw" Aug 13 07:14:54.504222 kubelet[2544]: E0813 07:14:54.504200 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.504635 kubelet[2544]: W0813 07:14:54.504338 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.504873 kubelet[2544]: E0813 07:14:54.504853 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.505179 kubelet[2544]: I0813 07:14:54.505020 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/707e0f61-36a7-4b17-96a3-b98a0613f9bd-varrun\") pod \"csi-node-driver-9swvw\" (UID: \"707e0f61-36a7-4b17-96a3-b98a0613f9bd\") " pod="calico-system/csi-node-driver-9swvw" Aug 13 07:14:54.505710 kubelet[2544]: E0813 07:14:54.505689 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.505912 kubelet[2544]: W0813 07:14:54.505802 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.506170 kubelet[2544]: E0813 07:14:54.506150 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.506539 kubelet[2544]: I0813 07:14:54.506364 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/707e0f61-36a7-4b17-96a3-b98a0613f9bd-registration-dir\") pod \"csi-node-driver-9swvw\" (UID: \"707e0f61-36a7-4b17-96a3-b98a0613f9bd\") " pod="calico-system/csi-node-driver-9swvw" Aug 13 07:14:54.506539 kubelet[2544]: E0813 07:14:54.506302 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.506539 kubelet[2544]: W0813 07:14:54.506405 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.506539 kubelet[2544]: E0813 07:14:54.506420 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.509561 kubelet[2544]: E0813 07:14:54.507047 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.509561 kubelet[2544]: W0813 07:14:54.507065 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.509561 kubelet[2544]: E0813 07:14:54.507100 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.510818 kubelet[2544]: E0813 07:14:54.510682 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.510818 kubelet[2544]: W0813 07:14:54.510702 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.511135 kubelet[2544]: E0813 07:14:54.510985 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.511447 kubelet[2544]: E0813 07:14:54.511299 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.511447 kubelet[2544]: W0813 07:14:54.511318 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.511703 kubelet[2544]: E0813 07:14:54.511636 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.511703 kubelet[2544]: I0813 07:14:54.511679 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/707e0f61-36a7-4b17-96a3-b98a0613f9bd-kubelet-dir\") pod \"csi-node-driver-9swvw\" (UID: \"707e0f61-36a7-4b17-96a3-b98a0613f9bd\") " pod="calico-system/csi-node-driver-9swvw" Aug 13 07:14:54.512140 kubelet[2544]: E0813 07:14:54.512028 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.512140 kubelet[2544]: W0813 07:14:54.512046 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.512553 kubelet[2544]: E0813 07:14:54.512288 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.512988 kubelet[2544]: E0813 07:14:54.512969 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.513103 kubelet[2544]: W0813 07:14:54.513085 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.513227 kubelet[2544]: E0813 07:14:54.513208 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.513763 kubelet[2544]: E0813 07:14:54.513712 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.513763 kubelet[2544]: W0813 07:14:54.513730 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.514081 kubelet[2544]: E0813 07:14:54.513831 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.514458 kubelet[2544]: E0813 07:14:54.514401 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.514458 kubelet[2544]: W0813 07:14:54.514420 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.514458 kubelet[2544]: E0813 07:14:54.514437 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.515997 kubelet[2544]: E0813 07:14:54.515806 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.515997 kubelet[2544]: W0813 07:14:54.515825 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.515997 kubelet[2544]: E0813 07:14:54.515843 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.516983 kubelet[2544]: E0813 07:14:54.516771 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.516983 kubelet[2544]: W0813 07:14:54.516789 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.516983 kubelet[2544]: E0813 07:14:54.516807 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.516983 kubelet[2544]: I0813 07:14:54.516840 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/707e0f61-36a7-4b17-96a3-b98a0613f9bd-socket-dir\") pod \"csi-node-driver-9swvw\" (UID: \"707e0f61-36a7-4b17-96a3-b98a0613f9bd\") " pod="calico-system/csi-node-driver-9swvw" Aug 13 07:14:54.517597 kubelet[2544]: E0813 07:14:54.517569 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.517597 kubelet[2544]: W0813 07:14:54.517595 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.517747 kubelet[2544]: E0813 07:14:54.517613 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.518604 kubelet[2544]: E0813 07:14:54.518578 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.518604 kubelet[2544]: W0813 07:14:54.518602 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.518765 kubelet[2544]: E0813 07:14:54.518620 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.573072 containerd[1463]: time="2025-08-13T07:14:54.572982512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2fglh,Uid:d483117c-aada-4cb0-94ca-fa2e7d705d3c,Namespace:calico-system,Attempt:0,}" Aug 13 07:14:54.613381 containerd[1463]: time="2025-08-13T07:14:54.613252419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:14:54.613598 containerd[1463]: time="2025-08-13T07:14:54.613502188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:14:54.613694 containerd[1463]: time="2025-08-13T07:14:54.613598558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:14:54.614250 containerd[1463]: time="2025-08-13T07:14:54.613982568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:14:54.618090 kubelet[2544]: E0813 07:14:54.618060 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.618504 kubelet[2544]: W0813 07:14:54.618255 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.618504 kubelet[2544]: E0813 07:14:54.618296 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.621018 kubelet[2544]: E0813 07:14:54.619130 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.621018 kubelet[2544]: W0813 07:14:54.619186 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.621018 kubelet[2544]: E0813 07:14:54.619233 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.621018 kubelet[2544]: E0813 07:14:54.620040 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.621018 kubelet[2544]: W0813 07:14:54.620060 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.621018 kubelet[2544]: E0813 07:14:54.620252 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.621384 kubelet[2544]: E0813 07:14:54.621088 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.621384 kubelet[2544]: W0813 07:14:54.621155 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.621384 kubelet[2544]: E0813 07:14:54.621185 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.621852 kubelet[2544]: E0813 07:14:54.621828 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.621852 kubelet[2544]: W0813 07:14:54.621851 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.622000 kubelet[2544]: E0813 07:14:54.621875 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.622458 kubelet[2544]: E0813 07:14:54.622432 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.622613 kubelet[2544]: W0813 07:14:54.622456 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.622677 kubelet[2544]: E0813 07:14:54.622663 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.622975 kubelet[2544]: E0813 07:14:54.622951 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.622975 kubelet[2544]: W0813 07:14:54.622974 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.623231 kubelet[2544]: E0813 07:14:54.623205 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.623740 kubelet[2544]: E0813 07:14:54.623705 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.623740 kubelet[2544]: W0813 07:14:54.623734 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.623899 kubelet[2544]: E0813 07:14:54.623832 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.624402 kubelet[2544]: E0813 07:14:54.624378 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.624525 kubelet[2544]: W0813 07:14:54.624402 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.624722 kubelet[2544]: E0813 07:14:54.624695 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.625049 kubelet[2544]: E0813 07:14:54.625029 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.625049 kubelet[2544]: W0813 07:14:54.625049 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.625477 kubelet[2544]: E0813 07:14:54.625423 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.627222 kubelet[2544]: E0813 07:14:54.627198 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.627222 kubelet[2544]: W0813 07:14:54.627222 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.627405 kubelet[2544]: E0813 07:14:54.627383 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.627893 kubelet[2544]: E0813 07:14:54.627867 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.627893 kubelet[2544]: W0813 07:14:54.627894 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.628081 kubelet[2544]: E0813 07:14:54.628047 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.629489 kubelet[2544]: E0813 07:14:54.629179 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.629489 kubelet[2544]: W0813 07:14:54.629200 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.629640 kubelet[2544]: E0813 07:14:54.629601 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.630448 kubelet[2544]: E0813 07:14:54.630404 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.630873 kubelet[2544]: W0813 07:14:54.630527 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.630873 kubelet[2544]: E0813 07:14:54.630832 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.631363 kubelet[2544]: E0813 07:14:54.631195 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.631363 kubelet[2544]: W0813 07:14:54.631215 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.631363 kubelet[2544]: E0813 07:14:54.631359 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.631898 kubelet[2544]: E0813 07:14:54.631861 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.631898 kubelet[2544]: W0813 07:14:54.631877 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.632334 kubelet[2544]: E0813 07:14:54.631963 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.632630 kubelet[2544]: E0813 07:14:54.632519 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.632630 kubelet[2544]: W0813 07:14:54.632538 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.632630 kubelet[2544]: E0813 07:14:54.632622 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.633011 kubelet[2544]: E0813 07:14:54.632914 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.633011 kubelet[2544]: W0813 07:14:54.632933 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.633399 kubelet[2544]: E0813 07:14:54.633063 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.633399 kubelet[2544]: E0813 07:14:54.633283 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.633399 kubelet[2544]: W0813 07:14:54.633298 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.633602 kubelet[2544]: E0813 07:14:54.633405 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.634020 kubelet[2544]: E0813 07:14:54.633761 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.634020 kubelet[2544]: W0813 07:14:54.633779 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.634020 kubelet[2544]: E0813 07:14:54.633803 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.634238 kubelet[2544]: E0813 07:14:54.634227 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.634298 kubelet[2544]: W0813 07:14:54.634243 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.634298 kubelet[2544]: E0813 07:14:54.634267 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.634936 kubelet[2544]: E0813 07:14:54.634694 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.634936 kubelet[2544]: W0813 07:14:54.634715 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.634936 kubelet[2544]: E0813 07:14:54.634808 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.637111 kubelet[2544]: E0813 07:14:54.636366 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.637111 kubelet[2544]: W0813 07:14:54.636385 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.637111 kubelet[2544]: E0813 07:14:54.636891 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.638648 kubelet[2544]: E0813 07:14:54.638622 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.643488 kubelet[2544]: W0813 07:14:54.638647 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.643488 kubelet[2544]: E0813 07:14:54.639084 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.643488 kubelet[2544]: E0813 07:14:54.641774 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.643488 kubelet[2544]: W0813 07:14:54.641793 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.643488 kubelet[2544]: E0813 07:14:54.641811 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:54.673528 kubelet[2544]: E0813 07:14:54.673440 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:54.673797 kubelet[2544]: W0813 07:14:54.673767 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:54.674032 kubelet[2544]: E0813 07:14:54.674010 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:54.678268 systemd[1]: Started cri-containerd-da53f119709f78b36e98da8860795e1ca8f9f40d87d4025c35212c5b8890cd0b.scope - libcontainer container da53f119709f78b36e98da8860795e1ca8f9f40d87d4025c35212c5b8890cd0b. Aug 13 07:14:54.779245 containerd[1463]: time="2025-08-13T07:14:54.778976422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2fglh,Uid:d483117c-aada-4cb0-94ca-fa2e7d705d3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"da53f119709f78b36e98da8860795e1ca8f9f40d87d4025c35212c5b8890cd0b\"" Aug 13 07:14:55.405257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730094258.mount: Deactivated successfully. 
Aug 13 07:14:56.459098 kubelet[2544]: E0813 07:14:56.458630 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9swvw" podUID="707e0f61-36a7-4b17-96a3-b98a0613f9bd" Aug 13 07:14:56.590736 containerd[1463]: time="2025-08-13T07:14:56.590676214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:56.592702 containerd[1463]: time="2025-08-13T07:14:56.592623669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 07:14:56.594211 containerd[1463]: time="2025-08-13T07:14:56.594164291Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:56.597396 containerd[1463]: time="2025-08-13T07:14:56.597320583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:56.598605 containerd[1463]: time="2025-08-13T07:14:56.598260960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.194820349s" Aug 13 07:14:56.598605 containerd[1463]: time="2025-08-13T07:14:56.598309761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 07:14:56.600650 containerd[1463]: time="2025-08-13T07:14:56.600262260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:14:56.622584 containerd[1463]: time="2025-08-13T07:14:56.622504119Z" level=info msg="CreateContainer within sandbox \"d975976e988b2fcffe69df2cf7cc64aaf495050ef8aa1d6a3df56550d476cea6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 07:14:56.651460 containerd[1463]: time="2025-08-13T07:14:56.646705942Z" level=info msg="CreateContainer within sandbox \"d975976e988b2fcffe69df2cf7cc64aaf495050ef8aa1d6a3df56550d476cea6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d0d600ad71485ec8c661b1bc30e09d64f6ebb106469deb00e83ef06723151f2a\"" Aug 13 07:14:56.652340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3877915075.mount: Deactivated successfully. Aug 13 07:14:56.655992 containerd[1463]: time="2025-08-13T07:14:56.654282335Z" level=info msg="StartContainer for \"d0d600ad71485ec8c661b1bc30e09d64f6ebb106469deb00e83ef06723151f2a\"" Aug 13 07:14:56.709687 systemd[1]: Started cri-containerd-d0d600ad71485ec8c661b1bc30e09d64f6ebb106469deb00e83ef06723151f2a.scope - libcontainer container d0d600ad71485ec8c661b1bc30e09d64f6ebb106469deb00e83ef06723151f2a. 
Aug 13 07:14:56.771343 containerd[1463]: time="2025-08-13T07:14:56.771265143Z" level=info msg="StartContainer for \"d0d600ad71485ec8c661b1bc30e09d64f6ebb106469deb00e83ef06723151f2a\" returns successfully" Aug 13 07:14:57.588671 kubelet[2544]: I0813 07:14:57.584828 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79f955bcf7-l9h5c" podStartSLOduration=2.387273928 podStartE2EDuration="4.584804715s" podCreationTimestamp="2025-08-13 07:14:53 +0000 UTC" firstStartedPulling="2025-08-13 07:14:54.402280251 +0000 UTC m=+22.092901460" lastFinishedPulling="2025-08-13 07:14:56.599811027 +0000 UTC m=+24.290432247" observedRunningTime="2025-08-13 07:14:57.583737784 +0000 UTC m=+25.274359015" watchObservedRunningTime="2025-08-13 07:14:57.584804715 +0000 UTC m=+25.275425953" Aug 13 07:14:57.640587 kubelet[2544]: E0813 07:14:57.640535 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.640861 kubelet[2544]: W0813 07:14:57.640790 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.641598 kubelet[2544]: E0813 07:14:57.641504 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.643441 kubelet[2544]: E0813 07:14:57.643299 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.643441 kubelet[2544]: W0813 07:14:57.643320 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.643441 kubelet[2544]: E0813 07:14:57.643347 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.644120 kubelet[2544]: E0813 07:14:57.643988 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.644120 kubelet[2544]: W0813 07:14:57.644008 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.644120 kubelet[2544]: E0813 07:14:57.644037 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.644929 kubelet[2544]: E0813 07:14:57.644619 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.644929 kubelet[2544]: W0813 07:14:57.644638 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.644929 kubelet[2544]: E0813 07:14:57.644656 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.645609 kubelet[2544]: E0813 07:14:57.645295 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.645609 kubelet[2544]: W0813 07:14:57.645314 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.645609 kubelet[2544]: E0813 07:14:57.645331 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.646046 kubelet[2544]: E0813 07:14:57.645885 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.646046 kubelet[2544]: W0813 07:14:57.645904 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.646046 kubelet[2544]: E0813 07:14:57.645921 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.646788 kubelet[2544]: E0813 07:14:57.646457 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.646788 kubelet[2544]: W0813 07:14:57.646499 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.646788 kubelet[2544]: E0813 07:14:57.646519 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.647189 kubelet[2544]: E0813 07:14:57.647075 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.647189 kubelet[2544]: W0813 07:14:57.647092 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.647189 kubelet[2544]: E0813 07:14:57.647110 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.647854 kubelet[2544]: E0813 07:14:57.647671 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.647854 kubelet[2544]: W0813 07:14:57.647690 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.647854 kubelet[2544]: E0813 07:14:57.647706 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.648389 kubelet[2544]: E0813 07:14:57.648289 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.648389 kubelet[2544]: W0813 07:14:57.648306 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.648389 kubelet[2544]: E0813 07:14:57.648324 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.649087 kubelet[2544]: E0813 07:14:57.648965 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.649087 kubelet[2544]: W0813 07:14:57.648983 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.649087 kubelet[2544]: E0813 07:14:57.648999 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.649787 kubelet[2544]: E0813 07:14:57.649597 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.649787 kubelet[2544]: W0813 07:14:57.649617 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.649787 kubelet[2544]: E0813 07:14:57.649635 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.650560 kubelet[2544]: E0813 07:14:57.650372 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.650560 kubelet[2544]: W0813 07:14:57.650391 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.650560 kubelet[2544]: E0813 07:14:57.650408 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.651297 kubelet[2544]: E0813 07:14:57.651004 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.651297 kubelet[2544]: W0813 07:14:57.651034 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.651297 kubelet[2544]: E0813 07:14:57.651052 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.651724 kubelet[2544]: E0813 07:14:57.651617 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.651724 kubelet[2544]: W0813 07:14:57.651637 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.651724 kubelet[2544]: E0813 07:14:57.651653 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.657402 kubelet[2544]: E0813 07:14:57.657372 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.657402 kubelet[2544]: W0813 07:14:57.657401 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.657402 kubelet[2544]: E0813 07:14:57.657424 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.658398 kubelet[2544]: E0813 07:14:57.657908 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.658398 kubelet[2544]: W0813 07:14:57.657932 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.658398 kubelet[2544]: E0813 07:14:57.657952 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.658398 kubelet[2544]: E0813 07:14:57.658332 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.658398 kubelet[2544]: W0813 07:14:57.658345 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.658398 kubelet[2544]: E0813 07:14:57.658366 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.658843 kubelet[2544]: E0813 07:14:57.658749 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.658843 kubelet[2544]: W0813 07:14:57.658777 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.658843 kubelet[2544]: E0813 07:14:57.658812 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.659563 kubelet[2544]: E0813 07:14:57.659151 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.659563 kubelet[2544]: W0813 07:14:57.659171 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.659563 kubelet[2544]: E0813 07:14:57.659266 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.659563 kubelet[2544]: E0813 07:14:57.659551 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.659563 kubelet[2544]: W0813 07:14:57.659565 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.659882 kubelet[2544]: E0813 07:14:57.659682 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.659947 kubelet[2544]: E0813 07:14:57.659914 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.659947 kubelet[2544]: W0813 07:14:57.659927 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.660836 kubelet[2544]: E0813 07:14:57.660221 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.661546 kubelet[2544]: E0813 07:14:57.660703 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.662258 kubelet[2544]: W0813 07:14:57.661505 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.662258 kubelet[2544]: E0813 07:14:57.662075 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.662598 kubelet[2544]: E0813 07:14:57.662509 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.662598 kubelet[2544]: W0813 07:14:57.662528 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.663447 kubelet[2544]: E0813 07:14:57.662765 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.663884 kubelet[2544]: E0813 07:14:57.663863 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.664030 kubelet[2544]: W0813 07:14:57.664001 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.664413 kubelet[2544]: E0813 07:14:57.664266 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.664640 kubelet[2544]: E0813 07:14:57.664599 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.664640 kubelet[2544]: W0813 07:14:57.664618 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.665122 kubelet[2544]: E0813 07:14:57.664869 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.665324 kubelet[2544]: E0813 07:14:57.665286 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.665324 kubelet[2544]: W0813 07:14:57.665304 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.665818 kubelet[2544]: E0813 07:14:57.665568 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.666052 kubelet[2544]: E0813 07:14:57.665983 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.666052 kubelet[2544]: W0813 07:14:57.666006 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.666380 kubelet[2544]: E0813 07:14:57.666272 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.666609 kubelet[2544]: E0813 07:14:57.666592 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.666815 kubelet[2544]: W0813 07:14:57.666712 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.666815 kubelet[2544]: E0813 07:14:57.666756 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.667459 kubelet[2544]: E0813 07:14:57.667301 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.667459 kubelet[2544]: W0813 07:14:57.667319 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.667459 kubelet[2544]: E0813 07:14:57.667341 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.668588 kubelet[2544]: E0813 07:14:57.668305 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.668588 kubelet[2544]: W0813 07:14:57.668323 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.668588 kubelet[2544]: E0813 07:14:57.668344 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.669703 kubelet[2544]: E0813 07:14:57.669450 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.669703 kubelet[2544]: W0813 07:14:57.669525 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.669703 kubelet[2544]: E0813 07:14:57.669549 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:14:57.670482 kubelet[2544]: E0813 07:14:57.670398 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:14:57.670482 kubelet[2544]: W0813 07:14:57.670419 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:14:57.670482 kubelet[2544]: E0813 07:14:57.670437 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:14:57.767483 containerd[1463]: time="2025-08-13T07:14:57.767410346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:57.768958 containerd[1463]: time="2025-08-13T07:14:57.768861102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:14:57.770529 containerd[1463]: time="2025-08-13T07:14:57.770273648Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:57.774519 containerd[1463]: time="2025-08-13T07:14:57.773404179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:57.775289 containerd[1463]: time="2025-08-13T07:14:57.775245219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.174942519s" Aug 13 07:14:57.775456 containerd[1463]: time="2025-08-13T07:14:57.775432833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:14:57.780912 containerd[1463]: time="2025-08-13T07:14:57.780855378Z" level=info msg="CreateContainer within sandbox \"da53f119709f78b36e98da8860795e1ca8f9f40d87d4025c35212c5b8890cd0b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:14:57.804814 containerd[1463]: time="2025-08-13T07:14:57.804752752Z" level=info msg="CreateContainer within sandbox \"da53f119709f78b36e98da8860795e1ca8f9f40d87d4025c35212c5b8890cd0b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"10c68564b2ee61d091aca4502af6544dd91b303c80f444de4f14f9e626271110\"" Aug 13 07:14:57.806562 containerd[1463]: time="2025-08-13T07:14:57.805684739Z" level=info msg="StartContainer for \"10c68564b2ee61d091aca4502af6544dd91b303c80f444de4f14f9e626271110\"" Aug 13 07:14:57.863728 systemd[1]: Started cri-containerd-10c68564b2ee61d091aca4502af6544dd91b303c80f444de4f14f9e626271110.scope - libcontainer container 10c68564b2ee61d091aca4502af6544dd91b303c80f444de4f14f9e626271110. Aug 13 07:14:57.914845 containerd[1463]: time="2025-08-13T07:14:57.914786839Z" level=info msg="StartContainer for \"10c68564b2ee61d091aca4502af6544dd91b303c80f444de4f14f9e626271110\" returns successfully" Aug 13 07:14:57.933805 systemd[1]: cri-containerd-10c68564b2ee61d091aca4502af6544dd91b303c80f444de4f14f9e626271110.scope: Deactivated successfully. Aug 13 07:14:58.457076 kubelet[2544]: E0813 07:14:58.455558 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9swvw" podUID="707e0f61-36a7-4b17-96a3-b98a0613f9bd" Aug 13 07:14:58.575365 kubelet[2544]: I0813 07:14:58.575325 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:14:58.607962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10c68564b2ee61d091aca4502af6544dd91b303c80f444de4f14f9e626271110-rootfs.mount: Deactivated successfully. 
Aug 13 07:14:58.673090 containerd[1463]: time="2025-08-13T07:14:58.672644450Z" level=info msg="shim disconnected" id=10c68564b2ee61d091aca4502af6544dd91b303c80f444de4f14f9e626271110 namespace=k8s.io Aug 13 07:14:58.673090 containerd[1463]: time="2025-08-13T07:14:58.672770001Z" level=warning msg="cleaning up after shim disconnected" id=10c68564b2ee61d091aca4502af6544dd91b303c80f444de4f14f9e626271110 namespace=k8s.io Aug 13 07:14:58.673090 containerd[1463]: time="2025-08-13T07:14:58.672786671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:14:59.581769 containerd[1463]: time="2025-08-13T07:14:59.581690291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:15:00.456224 kubelet[2544]: E0813 07:15:00.455768 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9swvw" podUID="707e0f61-36a7-4b17-96a3-b98a0613f9bd" Aug 13 07:15:02.457041 kubelet[2544]: E0813 07:15:02.456361 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9swvw" podUID="707e0f61-36a7-4b17-96a3-b98a0613f9bd" Aug 13 07:15:02.901100 containerd[1463]: time="2025-08-13T07:15:02.901026173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:02.902545 containerd[1463]: time="2025-08-13T07:15:02.902450177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:15:02.903998 containerd[1463]: time="2025-08-13T07:15:02.903956602Z" level=info msg="ImageCreate event 
name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:02.909286 containerd[1463]: time="2025-08-13T07:15:02.909189321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:02.910649 containerd[1463]: time="2025-08-13T07:15:02.910461038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.328721426s" Aug 13 07:15:02.910649 containerd[1463]: time="2025-08-13T07:15:02.910532532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:15:02.914154 containerd[1463]: time="2025-08-13T07:15:02.914101853Z" level=info msg="CreateContainer within sandbox \"da53f119709f78b36e98da8860795e1ca8f9f40d87d4025c35212c5b8890cd0b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:15:02.935731 containerd[1463]: time="2025-08-13T07:15:02.935675331Z" level=info msg="CreateContainer within sandbox \"da53f119709f78b36e98da8860795e1ca8f9f40d87d4025c35212c5b8890cd0b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"139174d05b8bf1a50664915d90ef0051cedfc1bfbe6a54f1472f979ab86ac198\"" Aug 13 07:15:02.937009 containerd[1463]: time="2025-08-13T07:15:02.936895427Z" level=info msg="StartContainer for \"139174d05b8bf1a50664915d90ef0051cedfc1bfbe6a54f1472f979ab86ac198\"" Aug 13 07:15:02.990736 systemd[1]: Started 
cri-containerd-139174d05b8bf1a50664915d90ef0051cedfc1bfbe6a54f1472f979ab86ac198.scope - libcontainer container 139174d05b8bf1a50664915d90ef0051cedfc1bfbe6a54f1472f979ab86ac198. Aug 13 07:15:03.030780 containerd[1463]: time="2025-08-13T07:15:03.030726725Z" level=info msg="StartContainer for \"139174d05b8bf1a50664915d90ef0051cedfc1bfbe6a54f1472f979ab86ac198\" returns successfully" Aug 13 07:15:04.039425 containerd[1463]: time="2025-08-13T07:15:04.039349365Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:15:04.042286 systemd[1]: cri-containerd-139174d05b8bf1a50664915d90ef0051cedfc1bfbe6a54f1472f979ab86ac198.scope: Deactivated successfully. Aug 13 07:15:04.059295 kubelet[2544]: I0813 07:15:04.059258 2544 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:15:04.098275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-139174d05b8bf1a50664915d90ef0051cedfc1bfbe6a54f1472f979ab86ac198-rootfs.mount: Deactivated successfully. Aug 13 07:15:04.136739 systemd[1]: Created slice kubepods-burstable-pod392a02f5_e055_4b6b_9098_a7e0976f9e04.slice - libcontainer container kubepods-burstable-pod392a02f5_e055_4b6b_9098_a7e0976f9e04.slice. Aug 13 07:15:04.163489 systemd[1]: Created slice kubepods-burstable-pod227e33f7_b859_4c0c_b764_8eade43b88be.slice - libcontainer container kubepods-burstable-pod227e33f7_b859_4c0c_b764_8eade43b88be.slice. 
Aug 13 07:15:04.170154 kubelet[2544]: W0813 07:15:04.170078 2544 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object Aug 13 07:15:04.170429 kubelet[2544]: E0813 07:15:04.170155 2544 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 07:15:04.170429 kubelet[2544]: W0813 07:15:04.170271 2544 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object Aug 13 07:15:04.170429 kubelet[2544]: E0813 07:15:04.170296 2544 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 
'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 07:15:04.170429 kubelet[2544]: W0813 07:15:04.170360 2544 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object Aug 13 07:15:04.172996 kubelet[2544]: E0813 07:15:04.170377 2544 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 07:15:04.172996 kubelet[2544]: W0813 07:15:04.170430 2544 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object Aug 13 07:15:04.172996 kubelet[2544]: E0813 07:15:04.170447 2544 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group 
\"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 07:15:04.173656 kubelet[2544]: W0813 07:15:04.173561 2544 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object Aug 13 07:15:04.173656 kubelet[2544]: E0813 07:15:04.173599 2544 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 07:15:04.181907 systemd[1]: Created slice kubepods-besteffort-podec5447bc_9529_4230_8377_9fbb134088cb.slice - libcontainer container kubepods-besteffort-podec5447bc_9529_4230_8377_9fbb134088cb.slice. Aug 13 07:15:04.198444 systemd[1]: Created slice kubepods-besteffort-podd3025ee2_b9db_46e5_ad3b_6d08aebd9c73.slice - libcontainer container kubepods-besteffort-podd3025ee2_b9db_46e5_ad3b_6d08aebd9c73.slice. Aug 13 07:15:04.214137 systemd[1]: Created slice kubepods-besteffort-pod2016744c_fc88_45ab_9b7e_62d31ebc5f72.slice - libcontainer container kubepods-besteffort-pod2016744c_fc88_45ab_9b7e_62d31ebc5f72.slice. 
Aug 13 07:15:04.231783 systemd[1]: Created slice kubepods-besteffort-podd9138eb6_4b16_4ca5_add5_a7780c9f57fc.slice - libcontainer container kubepods-besteffort-podd9138eb6_4b16_4ca5_add5_a7780c9f57fc.slice. Aug 13 07:15:04.240019 systemd[1]: Created slice kubepods-besteffort-pod37af0644_e76e_4019_8384_f9b67a1e33ea.slice - libcontainer container kubepods-besteffort-pod37af0644_e76e_4019_8384_f9b67a1e33ea.slice. Aug 13 07:15:04.329085 kubelet[2544]: I0813 07:15:04.303742 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2016744c-fc88-45ab-9b7e-62d31ebc5f72-config\") pod \"goldmane-58fd7646b9-kc2rx\" (UID: \"2016744c-fc88-45ab-9b7e-62d31ebc5f72\") " pod="calico-system/goldmane-58fd7646b9-kc2rx" Aug 13 07:15:04.329085 kubelet[2544]: I0813 07:15:04.303809 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/227e33f7-b859-4c0c-b764-8eade43b88be-config-volume\") pod \"coredns-7c65d6cfc9-qt8wq\" (UID: \"227e33f7-b859-4c0c-b764-8eade43b88be\") " pod="kube-system/coredns-7c65d6cfc9-qt8wq" Aug 13 07:15:04.329085 kubelet[2544]: I0813 07:15:04.303845 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec5447bc-9529-4230-8377-9fbb134088cb-tigera-ca-bundle\") pod \"calico-kube-controllers-d84b9fbf7-fwlqp\" (UID: \"ec5447bc-9529-4230-8377-9fbb134088cb\") " pod="calico-system/calico-kube-controllers-d84b9fbf7-fwlqp" Aug 13 07:15:04.329085 kubelet[2544]: I0813 07:15:04.303880 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxf4k\" (UniqueName: \"kubernetes.io/projected/392a02f5-e055-4b6b-9098-a7e0976f9e04-kube-api-access-vxf4k\") pod \"coredns-7c65d6cfc9-sbmzm\" (UID: 
\"392a02f5-e055-4b6b-9098-a7e0976f9e04\") " pod="kube-system/coredns-7c65d6cfc9-sbmzm" Aug 13 07:15:04.329085 kubelet[2544]: I0813 07:15:04.303907 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdrqv\" (UniqueName: \"kubernetes.io/projected/227e33f7-b859-4c0c-b764-8eade43b88be-kube-api-access-gdrqv\") pod \"coredns-7c65d6cfc9-qt8wq\" (UID: \"227e33f7-b859-4c0c-b764-8eade43b88be\") " pod="kube-system/coredns-7c65d6cfc9-qt8wq" Aug 13 07:15:04.332829 kubelet[2544]: I0813 07:15:04.303957 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-backend-key-pair\") pod \"whisker-77d755b4b8-pfvbp\" (UID: \"37af0644-e76e-4019-8384-f9b67a1e33ea\") " pod="calico-system/whisker-77d755b4b8-pfvbp" Aug 13 07:15:04.332829 kubelet[2544]: I0813 07:15:04.303988 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9138eb6-4b16-4ca5-add5-a7780c9f57fc-calico-apiserver-certs\") pod \"calico-apiserver-694b6c886d-bxdxg\" (UID: \"d9138eb6-4b16-4ca5-add5-a7780c9f57fc\") " pod="calico-apiserver/calico-apiserver-694b6c886d-bxdxg" Aug 13 07:15:04.332829 kubelet[2544]: I0813 07:15:04.304020 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flk8s\" (UniqueName: \"kubernetes.io/projected/2016744c-fc88-45ab-9b7e-62d31ebc5f72-kube-api-access-flk8s\") pod \"goldmane-58fd7646b9-kc2rx\" (UID: \"2016744c-fc88-45ab-9b7e-62d31ebc5f72\") " pod="calico-system/goldmane-58fd7646b9-kc2rx" Aug 13 07:15:04.332829 kubelet[2544]: I0813 07:15:04.304049 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-ca-bundle\") pod \"whisker-77d755b4b8-pfvbp\" (UID: \"37af0644-e76e-4019-8384-f9b67a1e33ea\") " pod="calico-system/whisker-77d755b4b8-pfvbp" Aug 13 07:15:04.332829 kubelet[2544]: I0813 07:15:04.304077 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pss28\" (UniqueName: \"kubernetes.io/projected/d3025ee2-b9db-46e5-ad3b-6d08aebd9c73-kube-api-access-pss28\") pod \"calico-apiserver-694b6c886d-h85bl\" (UID: \"d3025ee2-b9db-46e5-ad3b-6d08aebd9c73\") " pod="calico-apiserver/calico-apiserver-694b6c886d-h85bl" Aug 13 07:15:04.333134 kubelet[2544]: I0813 07:15:04.304124 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/392a02f5-e055-4b6b-9098-a7e0976f9e04-config-volume\") pod \"coredns-7c65d6cfc9-sbmzm\" (UID: \"392a02f5-e055-4b6b-9098-a7e0976f9e04\") " pod="kube-system/coredns-7c65d6cfc9-sbmzm" Aug 13 07:15:04.333134 kubelet[2544]: I0813 07:15:04.304154 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3025ee2-b9db-46e5-ad3b-6d08aebd9c73-calico-apiserver-certs\") pod \"calico-apiserver-694b6c886d-h85bl\" (UID: \"d3025ee2-b9db-46e5-ad3b-6d08aebd9c73\") " pod="calico-apiserver/calico-apiserver-694b6c886d-h85bl" Aug 13 07:15:04.333134 kubelet[2544]: I0813 07:15:04.304199 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d75pd\" (UniqueName: \"kubernetes.io/projected/37af0644-e76e-4019-8384-f9b67a1e33ea-kube-api-access-d75pd\") pod \"whisker-77d755b4b8-pfvbp\" (UID: \"37af0644-e76e-4019-8384-f9b67a1e33ea\") " pod="calico-system/whisker-77d755b4b8-pfvbp" Aug 13 07:15:04.333134 kubelet[2544]: I0813 07:15:04.304230 2544 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq66l\" (UniqueName: \"kubernetes.io/projected/ec5447bc-9529-4230-8377-9fbb134088cb-kube-api-access-pq66l\") pod \"calico-kube-controllers-d84b9fbf7-fwlqp\" (UID: \"ec5447bc-9529-4230-8377-9fbb134088cb\") " pod="calico-system/calico-kube-controllers-d84b9fbf7-fwlqp" Aug 13 07:15:04.333134 kubelet[2544]: I0813 07:15:04.304265 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2016744c-fc88-45ab-9b7e-62d31ebc5f72-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-kc2rx\" (UID: \"2016744c-fc88-45ab-9b7e-62d31ebc5f72\") " pod="calico-system/goldmane-58fd7646b9-kc2rx" Aug 13 07:15:04.333531 kubelet[2544]: I0813 07:15:04.304292 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vk6p\" (UniqueName: \"kubernetes.io/projected/d9138eb6-4b16-4ca5-add5-a7780c9f57fc-kube-api-access-6vk6p\") pod \"calico-apiserver-694b6c886d-bxdxg\" (UID: \"d9138eb6-4b16-4ca5-add5-a7780c9f57fc\") " pod="calico-apiserver/calico-apiserver-694b6c886d-bxdxg" Aug 13 07:15:04.333531 kubelet[2544]: I0813 07:15:04.304327 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2016744c-fc88-45ab-9b7e-62d31ebc5f72-goldmane-key-pair\") pod \"goldmane-58fd7646b9-kc2rx\" (UID: \"2016744c-fc88-45ab-9b7e-62d31ebc5f72\") " pod="calico-system/goldmane-58fd7646b9-kc2rx" Aug 13 07:15:04.475544 containerd[1463]: time="2025-08-13T07:15:04.474337620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qt8wq,Uid:227e33f7-b859-4c0c-b764-8eade43b88be,Namespace:kube-system,Attempt:0,}" Aug 13 07:15:04.499273 systemd[1]: Created slice kubepods-besteffort-pod707e0f61_36a7_4b17_96a3_b98a0613f9bd.slice - libcontainer container 
kubepods-besteffort-pod707e0f61_36a7_4b17_96a3_b98a0613f9bd.slice. Aug 13 07:15:04.505510 containerd[1463]: time="2025-08-13T07:15:04.505123130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694b6c886d-h85bl,Uid:d3025ee2-b9db-46e5-ad3b-6d08aebd9c73,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:15:04.507793 containerd[1463]: time="2025-08-13T07:15:04.507229380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9swvw,Uid:707e0f61-36a7-4b17-96a3-b98a0613f9bd,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:04.538213 containerd[1463]: time="2025-08-13T07:15:04.538148763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694b6c886d-bxdxg,Uid:d9138eb6-4b16-4ca5-add5-a7780c9f57fc,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:15:04.653239 containerd[1463]: time="2025-08-13T07:15:04.652337242Z" level=info msg="shim disconnected" id=139174d05b8bf1a50664915d90ef0051cedfc1bfbe6a54f1472f979ab86ac198 namespace=k8s.io Aug 13 07:15:04.653239 containerd[1463]: time="2025-08-13T07:15:04.652404843Z" level=warning msg="cleaning up after shim disconnected" id=139174d05b8bf1a50664915d90ef0051cedfc1bfbe6a54f1472f979ab86ac198 namespace=k8s.io Aug 13 07:15:04.653239 containerd[1463]: time="2025-08-13T07:15:04.652458399Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:15:04.682905 containerd[1463]: time="2025-08-13T07:15:04.682849489Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:15:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:15:04.761006 containerd[1463]: time="2025-08-13T07:15:04.760840378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbmzm,Uid:392a02f5-e055-4b6b-9098-a7e0976f9e04,Namespace:kube-system,Attempt:0,}" Aug 13 07:15:04.799862 containerd[1463]: time="2025-08-13T07:15:04.799317064Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d84b9fbf7-fwlqp,Uid:ec5447bc-9529-4230-8377-9fbb134088cb,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:04.940312 containerd[1463]: time="2025-08-13T07:15:04.940077962Z" level=error msg="Failed to destroy network for sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:04.943393 containerd[1463]: time="2025-08-13T07:15:04.942634144Z" level=error msg="encountered an error cleaning up failed sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:04.944246 containerd[1463]: time="2025-08-13T07:15:04.944196731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694b6c886d-bxdxg,Uid:d9138eb6-4b16-4ca5-add5-a7780c9f57fc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:04.945148 kubelet[2544]: E0813 07:15:04.945020 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 07:15:04.945633 kubelet[2544]: E0813 07:15:04.945173 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694b6c886d-bxdxg" Aug 13 07:15:04.945633 kubelet[2544]: E0813 07:15:04.945212 2544 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694b6c886d-bxdxg" Aug 13 07:15:04.945633 kubelet[2544]: E0813 07:15:04.945293 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-694b6c886d-bxdxg_calico-apiserver(d9138eb6-4b16-4ca5-add5-a7780c9f57fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-694b6c886d-bxdxg_calico-apiserver(d9138eb6-4b16-4ca5-add5-a7780c9f57fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694b6c886d-bxdxg" podUID="d9138eb6-4b16-4ca5-add5-a7780c9f57fc" Aug 13 07:15:05.005137 containerd[1463]: time="2025-08-13T07:15:05.005071967Z" level=error msg="Failed to destroy network for 
sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.006238 containerd[1463]: time="2025-08-13T07:15:05.005810394Z" level=error msg="encountered an error cleaning up failed sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.006238 containerd[1463]: time="2025-08-13T07:15:05.005906519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qt8wq,Uid:227e33f7-b859-4c0c-b764-8eade43b88be,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.006238 containerd[1463]: time="2025-08-13T07:15:05.006099123Z" level=error msg="Failed to destroy network for sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.007035 containerd[1463]: time="2025-08-13T07:15:05.006982899Z" level=error msg="encountered an error cleaning up failed sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.007556 containerd[1463]: time="2025-08-13T07:15:05.007514320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694b6c886d-h85bl,Uid:d3025ee2-b9db-46e5-ad3b-6d08aebd9c73,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.007960 kubelet[2544]: E0813 07:15:05.007902 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.008133 kubelet[2544]: E0813 07:15:05.007998 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qt8wq" Aug 13 07:15:05.008133 kubelet[2544]: E0813 07:15:05.008030 2544 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qt8wq" Aug 13 07:15:05.008133 kubelet[2544]: E0813 07:15:05.008094 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qt8wq_kube-system(227e33f7-b859-4c0c-b764-8eade43b88be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qt8wq_kube-system(227e33f7-b859-4c0c-b764-8eade43b88be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qt8wq" podUID="227e33f7-b859-4c0c-b764-8eade43b88be" Aug 13 07:15:05.011489 kubelet[2544]: E0813 07:15:05.010628 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.011489 kubelet[2544]: E0813 07:15:05.010702 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694b6c886d-h85bl" Aug 13 07:15:05.011489 kubelet[2544]: E0813 07:15:05.010742 2544 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694b6c886d-h85bl" Aug 13 07:15:05.011740 kubelet[2544]: E0813 07:15:05.010801 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-694b6c886d-h85bl_calico-apiserver(d3025ee2-b9db-46e5-ad3b-6d08aebd9c73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-694b6c886d-h85bl_calico-apiserver(d3025ee2-b9db-46e5-ad3b-6d08aebd9c73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694b6c886d-h85bl" podUID="d3025ee2-b9db-46e5-ad3b-6d08aebd9c73" Aug 13 07:15:05.014449 containerd[1463]: time="2025-08-13T07:15:05.014395343Z" level=error msg="Failed to destroy network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.015220 containerd[1463]: time="2025-08-13T07:15:05.015172685Z" level=error msg="encountered an error cleaning up failed sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 07:15:05.015332 containerd[1463]: time="2025-08-13T07:15:05.015254204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9swvw,Uid:707e0f61-36a7-4b17-96a3-b98a0613f9bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.016987 kubelet[2544]: E0813 07:15:05.016935 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.017115 kubelet[2544]: E0813 07:15:05.017005 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9swvw" Aug 13 07:15:05.017115 kubelet[2544]: E0813 07:15:05.017035 2544 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9swvw" Aug 13 
07:15:05.017115 kubelet[2544]: E0813 07:15:05.017090 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9swvw_calico-system(707e0f61-36a7-4b17-96a3-b98a0613f9bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9swvw_calico-system(707e0f61-36a7-4b17-96a3-b98a0613f9bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9swvw" podUID="707e0f61-36a7-4b17-96a3-b98a0613f9bd" Aug 13 07:15:05.050875 containerd[1463]: time="2025-08-13T07:15:05.050783733Z" level=error msg="Failed to destroy network for sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.051966 containerd[1463]: time="2025-08-13T07:15:05.051555962Z" level=error msg="encountered an error cleaning up failed sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.051966 containerd[1463]: time="2025-08-13T07:15:05.051664945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbmzm,Uid:392a02f5-e055-4b6b-9098-a7e0976f9e04,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.052589 kubelet[2544]: E0813 07:15:05.051993 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.052589 kubelet[2544]: E0813 07:15:05.052192 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sbmzm" Aug 13 07:15:05.053068 kubelet[2544]: E0813 07:15:05.052959 2544 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sbmzm" Aug 13 07:15:05.054323 kubelet[2544]: E0813 07:15:05.053433 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sbmzm_kube-system(392a02f5-e055-4b6b-9098-a7e0976f9e04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sbmzm_kube-system(392a02f5-e055-4b6b-9098-a7e0976f9e04)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sbmzm" podUID="392a02f5-e055-4b6b-9098-a7e0976f9e04" Aug 13 07:15:05.055728 containerd[1463]: time="2025-08-13T07:15:05.055586942Z" level=error msg="Failed to destroy network for sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.056386 containerd[1463]: time="2025-08-13T07:15:05.056336020Z" level=error msg="encountered an error cleaning up failed sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.056502 containerd[1463]: time="2025-08-13T07:15:05.056431667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d84b9fbf7-fwlqp,Uid:ec5447bc-9529-4230-8377-9fbb134088cb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.056862 kubelet[2544]: E0813 07:15:05.056799 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.056969 kubelet[2544]: E0813 07:15:05.056896 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d84b9fbf7-fwlqp" Aug 13 07:15:05.057047 kubelet[2544]: E0813 07:15:05.056960 2544 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d84b9fbf7-fwlqp" Aug 13 07:15:05.057109 kubelet[2544]: E0813 07:15:05.057048 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d84b9fbf7-fwlqp_calico-system(ec5447bc-9529-4230-8377-9fbb134088cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d84b9fbf7-fwlqp_calico-system(ec5447bc-9529-4230-8377-9fbb134088cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d84b9fbf7-fwlqp" podUID="ec5447bc-9529-4230-8377-9fbb134088cb" Aug 13 07:15:05.406387 kubelet[2544]: E0813 07:15:05.406323 2544 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Aug 13 07:15:05.407207 kubelet[2544]: E0813 07:15:05.406449 2544 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2016744c-fc88-45ab-9b7e-62d31ebc5f72-goldmane-key-pair podName:2016744c-fc88-45ab-9b7e-62d31ebc5f72 nodeName:}" failed. No retries permitted until 2025-08-13 07:15:05.906421999 +0000 UTC m=+33.597043217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/2016744c-fc88-45ab-9b7e-62d31ebc5f72-goldmane-key-pair") pod "goldmane-58fd7646b9-kc2rx" (UID: "2016744c-fc88-45ab-9b7e-62d31ebc5f72") : failed to sync secret cache: timed out waiting for the condition Aug 13 07:15:05.408550 kubelet[2544]: E0813 07:15:05.408514 2544 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Aug 13 07:15:05.408756 kubelet[2544]: E0813 07:15:05.408604 2544 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-backend-key-pair podName:37af0644-e76e-4019-8384-f9b67a1e33ea nodeName:}" failed. No retries permitted until 2025-08-13 07:15:05.908581917 +0000 UTC m=+33.599203125 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-backend-key-pair") pod "whisker-77d755b4b8-pfvbp" (UID: "37af0644-e76e-4019-8384-f9b67a1e33ea") : failed to sync secret cache: timed out waiting for the condition Aug 13 07:15:05.601181 containerd[1463]: time="2025-08-13T07:15:05.600824017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:15:05.603052 kubelet[2544]: I0813 07:15:05.602991 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:05.604495 containerd[1463]: time="2025-08-13T07:15:05.604263819Z" level=info msg="StopPodSandbox for \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\"" Aug 13 07:15:05.605132 containerd[1463]: time="2025-08-13T07:15:05.605028602Z" level=info msg="Ensure that sandbox f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100 in task-service has been cleanup successfully" Aug 13 07:15:05.607821 kubelet[2544]: I0813 07:15:05.606887 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:05.608035 containerd[1463]: time="2025-08-13T07:15:05.607745993Z" level=info msg="StopPodSandbox for \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\"" Aug 13 07:15:05.608035 containerd[1463]: time="2025-08-13T07:15:05.607966231Z" level=info msg="Ensure that sandbox e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a in task-service has been cleanup successfully" Aug 13 07:15:05.610746 kubelet[2544]: I0813 07:15:05.610651 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:05.611847 containerd[1463]: 
time="2025-08-13T07:15:05.611427573Z" level=info msg="StopPodSandbox for \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\"" Aug 13 07:15:05.611847 containerd[1463]: time="2025-08-13T07:15:05.611708962Z" level=info msg="Ensure that sandbox d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d in task-service has been cleanup successfully" Aug 13 07:15:05.617313 kubelet[2544]: I0813 07:15:05.617043 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:05.619950 containerd[1463]: time="2025-08-13T07:15:05.619537452Z" level=info msg="StopPodSandbox for \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\"" Aug 13 07:15:05.622548 kubelet[2544]: I0813 07:15:05.622386 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:05.624453 containerd[1463]: time="2025-08-13T07:15:05.624364064Z" level=info msg="StopPodSandbox for \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\"" Aug 13 07:15:05.626486 containerd[1463]: time="2025-08-13T07:15:05.625541230Z" level=info msg="Ensure that sandbox 320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d in task-service has been cleanup successfully" Aug 13 07:15:05.628514 containerd[1463]: time="2025-08-13T07:15:05.627808191Z" level=info msg="Ensure that sandbox a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb in task-service has been cleanup successfully" Aug 13 07:15:05.642302 kubelet[2544]: I0813 07:15:05.642180 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:05.645193 containerd[1463]: time="2025-08-13T07:15:05.645120986Z" level=info msg="StopPodSandbox for 
\"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\"" Aug 13 07:15:05.655360 containerd[1463]: time="2025-08-13T07:15:05.653668158Z" level=info msg="Ensure that sandbox d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4 in task-service has been cleanup successfully" Aug 13 07:15:05.764632 containerd[1463]: time="2025-08-13T07:15:05.764553020Z" level=error msg="StopPodSandbox for \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\" failed" error="failed to destroy network for sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.765658 kubelet[2544]: E0813 07:15:05.765324 2544 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:05.765658 kubelet[2544]: E0813 07:15:05.765413 2544 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a"} Aug 13 07:15:05.765658 kubelet[2544]: E0813 07:15:05.765524 2544 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec5447bc-9529-4230-8377-9fbb134088cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:05.765658 kubelet[2544]: E0813 07:15:05.765561 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec5447bc-9529-4230-8377-9fbb134088cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d84b9fbf7-fwlqp" podUID="ec5447bc-9529-4230-8377-9fbb134088cb" Aug 13 07:15:05.782523 containerd[1463]: time="2025-08-13T07:15:05.781311568Z" level=error msg="StopPodSandbox for \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\" failed" error="failed to destroy network for sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.782523 containerd[1463]: time="2025-08-13T07:15:05.782101666Z" level=error msg="StopPodSandbox for \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\" failed" error="failed to destroy network for sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.782775 kubelet[2544]: E0813 07:15:05.781682 2544 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:05.782775 kubelet[2544]: E0813 07:15:05.781766 2544 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d"} Aug 13 07:15:05.782775 kubelet[2544]: E0813 07:15:05.781836 2544 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3025ee2-b9db-46e5-ad3b-6d08aebd9c73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:05.782775 kubelet[2544]: E0813 07:15:05.781872 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3025ee2-b9db-46e5-ad3b-6d08aebd9c73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694b6c886d-h85bl" podUID="d3025ee2-b9db-46e5-ad3b-6d08aebd9c73" Aug 13 07:15:05.783109 kubelet[2544]: E0813 07:15:05.782315 2544 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:05.783109 kubelet[2544]: E0813 07:15:05.782357 2544 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100"} Aug 13 07:15:05.783109 kubelet[2544]: E0813 07:15:05.782400 2544 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"392a02f5-e055-4b6b-9098-a7e0976f9e04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:05.783109 kubelet[2544]: E0813 07:15:05.782432 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"392a02f5-e055-4b6b-9098-a7e0976f9e04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sbmzm" podUID="392a02f5-e055-4b6b-9098-a7e0976f9e04" Aug 13 07:15:05.790431 containerd[1463]: time="2025-08-13T07:15:05.789796702Z" level=error msg="StopPodSandbox for \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\" failed" error="failed to destroy network for sandbox 
\"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.790634 kubelet[2544]: E0813 07:15:05.790201 2544 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:05.790634 kubelet[2544]: E0813 07:15:05.790287 2544 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d"} Aug 13 07:15:05.790634 kubelet[2544]: E0813 07:15:05.790355 2544 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9138eb6-4b16-4ca5-add5-a7780c9f57fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:05.790634 kubelet[2544]: E0813 07:15:05.790388 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9138eb6-4b16-4ca5-add5-a7780c9f57fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694b6c886d-bxdxg" podUID="d9138eb6-4b16-4ca5-add5-a7780c9f57fc" Aug 13 07:15:05.797453 containerd[1463]: time="2025-08-13T07:15:05.796874653Z" level=error msg="StopPodSandbox for \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\" failed" error="failed to destroy network for sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.797669 kubelet[2544]: E0813 07:15:05.797202 2544 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:05.797669 kubelet[2544]: E0813 07:15:05.797269 2544 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4"} Aug 13 07:15:05.797669 kubelet[2544]: E0813 07:15:05.797316 2544 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"227e33f7-b859-4c0c-b764-8eade43b88be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Aug 13 07:15:05.797669 kubelet[2544]: E0813 07:15:05.797355 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"227e33f7-b859-4c0c-b764-8eade43b88be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qt8wq" podUID="227e33f7-b859-4c0c-b764-8eade43b88be" Aug 13 07:15:05.801127 containerd[1463]: time="2025-08-13T07:15:05.801044321Z" level=error msg="StopPodSandbox for \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\" failed" error="failed to destroy network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:05.801806 kubelet[2544]: E0813 07:15:05.801414 2544 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:05.801806 kubelet[2544]: E0813 07:15:05.801538 2544 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb"} Aug 13 07:15:05.801806 kubelet[2544]: E0813 07:15:05.801617 2544 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"707e0f61-36a7-4b17-96a3-b98a0613f9bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:05.801806 kubelet[2544]: E0813 07:15:05.801652 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"707e0f61-36a7-4b17-96a3-b98a0613f9bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9swvw" podUID="707e0f61-36a7-4b17-96a3-b98a0613f9bd" Aug 13 07:15:06.021213 containerd[1463]: time="2025-08-13T07:15:06.021057246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kc2rx,Uid:2016744c-fc88-45ab-9b7e-62d31ebc5f72,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:06.108427 containerd[1463]: time="2025-08-13T07:15:06.108364913Z" level=error msg="Failed to destroy network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.111411 containerd[1463]: time="2025-08-13T07:15:06.108888395Z" level=error msg="encountered an error cleaning up failed sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.111411 containerd[1463]: time="2025-08-13T07:15:06.108964384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kc2rx,Uid:2016744c-fc88-45ab-9b7e-62d31ebc5f72,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.111721 kubelet[2544]: E0813 07:15:06.110749 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.111721 kubelet[2544]: E0813 07:15:06.110839 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-kc2rx" Aug 13 07:15:06.111721 kubelet[2544]: E0813 07:15:06.110872 2544 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-kc2rx" Aug 13 07:15:06.111919 kubelet[2544]: E0813 07:15:06.110937 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-kc2rx_calico-system(2016744c-fc88-45ab-9b7e-62d31ebc5f72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-kc2rx_calico-system(2016744c-fc88-45ab-9b7e-62d31ebc5f72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-kc2rx" podUID="2016744c-fc88-45ab-9b7e-62d31ebc5f72" Aug 13 07:15:06.113139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443-shm.mount: Deactivated successfully. 
Aug 13 07:15:06.131996 containerd[1463]: time="2025-08-13T07:15:06.131922534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77d755b4b8-pfvbp,Uid:37af0644-e76e-4019-8384-f9b67a1e33ea,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:06.222123 containerd[1463]: time="2025-08-13T07:15:06.222054364Z" level=error msg="Failed to destroy network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.226222 containerd[1463]: time="2025-08-13T07:15:06.223604934Z" level=error msg="encountered an error cleaning up failed sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.226222 containerd[1463]: time="2025-08-13T07:15:06.223691666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77d755b4b8-pfvbp,Uid:37af0644-e76e-4019-8384-f9b67a1e33ea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.226509 kubelet[2544]: E0813 07:15:06.225681 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.226509 kubelet[2544]: E0813 07:15:06.225785 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77d755b4b8-pfvbp" Aug 13 07:15:06.226509 kubelet[2544]: E0813 07:15:06.225818 2544 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77d755b4b8-pfvbp" Aug 13 07:15:06.226727 kubelet[2544]: E0813 07:15:06.225887 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77d755b4b8-pfvbp_calico-system(37af0644-e76e-4019-8384-f9b67a1e33ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77d755b4b8-pfvbp_calico-system(37af0644-e76e-4019-8384-f9b67a1e33ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77d755b4b8-pfvbp" podUID="37af0644-e76e-4019-8384-f9b67a1e33ea" Aug 13 07:15:06.227812 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa-shm.mount: Deactivated successfully. Aug 13 07:15:06.647721 kubelet[2544]: I0813 07:15:06.647682 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:06.650529 containerd[1463]: time="2025-08-13T07:15:06.649257375Z" level=info msg="StopPodSandbox for \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\"" Aug 13 07:15:06.650529 containerd[1463]: time="2025-08-13T07:15:06.649527087Z" level=info msg="Ensure that sandbox 9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa in task-service has been cleanup successfully" Aug 13 07:15:06.662686 kubelet[2544]: I0813 07:15:06.662647 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:06.666260 containerd[1463]: time="2025-08-13T07:15:06.664329822Z" level=info msg="StopPodSandbox for \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\"" Aug 13 07:15:06.666260 containerd[1463]: time="2025-08-13T07:15:06.664711843Z" level=info msg="Ensure that sandbox 73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443 in task-service has been cleanup successfully" Aug 13 07:15:06.734556 containerd[1463]: time="2025-08-13T07:15:06.734494524Z" level=error msg="StopPodSandbox for \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\" failed" error="failed to destroy network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.735089 kubelet[2544]: E0813 07:15:06.735038 2544 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:06.735224 kubelet[2544]: E0813 07:15:06.735111 2544 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa"} Aug 13 07:15:06.735224 kubelet[2544]: E0813 07:15:06.735173 2544 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37af0644-e76e-4019-8384-f9b67a1e33ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:06.735417 kubelet[2544]: E0813 07:15:06.735213 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37af0644-e76e-4019-8384-f9b67a1e33ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77d755b4b8-pfvbp" podUID="37af0644-e76e-4019-8384-f9b67a1e33ea" Aug 13 07:15:06.740568 containerd[1463]: time="2025-08-13T07:15:06.740508661Z" level=error msg="StopPodSandbox for 
\"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\" failed" error="failed to destroy network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:06.740974 kubelet[2544]: E0813 07:15:06.740930 2544 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:06.741152 kubelet[2544]: E0813 07:15:06.741126 2544 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443"} Aug 13 07:15:06.741288 kubelet[2544]: E0813 07:15:06.741265 2544 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2016744c-fc88-45ab-9b7e-62d31ebc5f72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:06.741538 kubelet[2544]: E0813 07:15:06.741450 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2016744c-fc88-45ab-9b7e-62d31ebc5f72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-kc2rx" podUID="2016744c-fc88-45ab-9b7e-62d31ebc5f72" Aug 13 07:15:12.250987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228942739.mount: Deactivated successfully. Aug 13 07:15:12.290851 containerd[1463]: time="2025-08-13T07:15:12.290755350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:12.292270 containerd[1463]: time="2025-08-13T07:15:12.292034561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:15:12.295430 containerd[1463]: time="2025-08-13T07:15:12.293417557Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:12.298525 containerd[1463]: time="2025-08-13T07:15:12.297078825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:12.301339 containerd[1463]: time="2025-08-13T07:15:12.301264768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.700385385s" Aug 13 07:15:12.301502 containerd[1463]: time="2025-08-13T07:15:12.301347442Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:15:12.328916 containerd[1463]: time="2025-08-13T07:15:12.328858327Z" level=info msg="CreateContainer within sandbox \"da53f119709f78b36e98da8860795e1ca8f9f40d87d4025c35212c5b8890cd0b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:15:12.356665 containerd[1463]: time="2025-08-13T07:15:12.356519769Z" level=info msg="CreateContainer within sandbox \"da53f119709f78b36e98da8860795e1ca8f9f40d87d4025c35212c5b8890cd0b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b68e7a45410416b79d66ec6a4bacdd8ed896c9b8518ccaeb22b1459213c25560\"" Aug 13 07:15:12.358892 containerd[1463]: time="2025-08-13T07:15:12.358835982Z" level=info msg="StartContainer for \"b68e7a45410416b79d66ec6a4bacdd8ed896c9b8518ccaeb22b1459213c25560\"" Aug 13 07:15:12.401710 systemd[1]: Started cri-containerd-b68e7a45410416b79d66ec6a4bacdd8ed896c9b8518ccaeb22b1459213c25560.scope - libcontainer container b68e7a45410416b79d66ec6a4bacdd8ed896c9b8518ccaeb22b1459213c25560. Aug 13 07:15:12.453781 containerd[1463]: time="2025-08-13T07:15:12.453698287Z" level=info msg="StartContainer for \"b68e7a45410416b79d66ec6a4bacdd8ed896c9b8518ccaeb22b1459213c25560\" returns successfully" Aug 13 07:15:12.587177 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:15:12.587380 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 07:15:12.761510 kubelet[2544]: I0813 07:15:12.759981 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2fglh" podStartSLOduration=1.240053447 podStartE2EDuration="18.759938524s" podCreationTimestamp="2025-08-13 07:14:54 +0000 UTC" firstStartedPulling="2025-08-13 07:14:54.784084289 +0000 UTC m=+22.474705496" lastFinishedPulling="2025-08-13 07:15:12.303969351 +0000 UTC m=+39.994590573" observedRunningTime="2025-08-13 07:15:12.753374268 +0000 UTC m=+40.443995496" watchObservedRunningTime="2025-08-13 07:15:12.759938524 +0000 UTC m=+40.450559753" Aug 13 07:15:12.775992 containerd[1463]: time="2025-08-13T07:15:12.775919379Z" level=info msg="StopPodSandbox for \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\"" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.877 [INFO][3774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.878 [INFO][3774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" iface="eth0" netns="/var/run/netns/cni-dec3b068-101f-166c-c50b-9bb7d1ad0e52" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.878 [INFO][3774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" iface="eth0" netns="/var/run/netns/cni-dec3b068-101f-166c-c50b-9bb7d1ad0e52" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.879 [INFO][3774] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" iface="eth0" netns="/var/run/netns/cni-dec3b068-101f-166c-c50b-9bb7d1ad0e52" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.879 [INFO][3774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.879 [INFO][3774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.930 [INFO][3790] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" HandleID="k8s-pod-network.9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.932 [INFO][3790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.932 [INFO][3790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.943 [WARNING][3790] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" HandleID="k8s-pod-network.9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.944 [INFO][3790] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" HandleID="k8s-pod-network.9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.949 [INFO][3790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:12.958920 containerd[1463]: 2025-08-13 07:15:12.955 [INFO][3774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:12.961264 containerd[1463]: time="2025-08-13T07:15:12.960570191Z" level=info msg="TearDown network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\" successfully" Aug 13 07:15:12.961264 containerd[1463]: time="2025-08-13T07:15:12.960631029Z" level=info msg="StopPodSandbox for \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\" returns successfully" Aug 13 07:15:13.167790 kubelet[2544]: I0813 07:15:13.167046 2544 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-backend-key-pair\") pod \"37af0644-e76e-4019-8384-f9b67a1e33ea\" (UID: \"37af0644-e76e-4019-8384-f9b67a1e33ea\") " Aug 13 07:15:13.167790 kubelet[2544]: I0813 07:15:13.167115 2544 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-ca-bundle\") pod \"37af0644-e76e-4019-8384-f9b67a1e33ea\" (UID: \"37af0644-e76e-4019-8384-f9b67a1e33ea\") " Aug 13 07:15:13.167790 kubelet[2544]: I0813 07:15:13.167168 2544 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d75pd\" (UniqueName: \"kubernetes.io/projected/37af0644-e76e-4019-8384-f9b67a1e33ea-kube-api-access-d75pd\") pod \"37af0644-e76e-4019-8384-f9b67a1e33ea\" (UID: \"37af0644-e76e-4019-8384-f9b67a1e33ea\") " Aug 13 07:15:13.172144 kubelet[2544]: I0813 07:15:13.171826 2544 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "37af0644-e76e-4019-8384-f9b67a1e33ea" (UID: "37af0644-e76e-4019-8384-f9b67a1e33ea"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:15:13.174404 kubelet[2544]: I0813 07:15:13.174337 2544 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37af0644-e76e-4019-8384-f9b67a1e33ea-kube-api-access-d75pd" (OuterVolumeSpecName: "kube-api-access-d75pd") pod "37af0644-e76e-4019-8384-f9b67a1e33ea" (UID: "37af0644-e76e-4019-8384-f9b67a1e33ea"). InnerVolumeSpecName "kube-api-access-d75pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:15:13.174889 kubelet[2544]: I0813 07:15:13.174849 2544 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "37af0644-e76e-4019-8384-f9b67a1e33ea" (UID: "37af0644-e76e-4019-8384-f9b67a1e33ea"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:15:13.248804 systemd[1]: run-netns-cni\x2ddec3b068\x2d101f\x2d166c\x2dc50b\x2d9bb7d1ad0e52.mount: Deactivated successfully. Aug 13 07:15:13.248972 systemd[1]: var-lib-kubelet-pods-37af0644\x2de76e\x2d4019\x2d8384\x2df9b67a1e33ea-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:15:13.249103 systemd[1]: var-lib-kubelet-pods-37af0644\x2de76e\x2d4019\x2d8384\x2df9b67a1e33ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd75pd.mount: Deactivated successfully. Aug 13 07:15:13.268322 kubelet[2544]: I0813 07:15:13.268245 2544 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d75pd\" (UniqueName: \"kubernetes.io/projected/37af0644-e76e-4019-8384-f9b67a1e33ea-kube-api-access-d75pd\") on node \"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 07:15:13.268322 kubelet[2544]: I0813 07:15:13.268300 2544 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-backend-key-pair\") on node \"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 07:15:13.268322 kubelet[2544]: I0813 07:15:13.268319 2544 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37af0644-e76e-4019-8384-f9b67a1e33ea-whisker-ca-bundle\") on node \"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 07:15:13.706713 systemd[1]: Removed slice kubepods-besteffort-pod37af0644_e76e_4019_8384_f9b67a1e33ea.slice - libcontainer container kubepods-besteffort-pod37af0644_e76e_4019_8384_f9b67a1e33ea.slice. 
Aug 13 07:15:13.818943 systemd[1]: Created slice kubepods-besteffort-podde708d80_6a60_4bbc_b289_35d0d689d019.slice - libcontainer container kubepods-besteffort-podde708d80_6a60_4bbc_b289_35d0d689d019.slice. Aug 13 07:15:13.973268 kubelet[2544]: I0813 07:15:13.973217 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h54rh\" (UniqueName: \"kubernetes.io/projected/de708d80-6a60-4bbc-b289-35d0d689d019-kube-api-access-h54rh\") pod \"whisker-6f8c6b6676-nbjbm\" (UID: \"de708d80-6a60-4bbc-b289-35d0d689d019\") " pod="calico-system/whisker-6f8c6b6676-nbjbm" Aug 13 07:15:13.974163 kubelet[2544]: I0813 07:15:13.973391 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de708d80-6a60-4bbc-b289-35d0d689d019-whisker-backend-key-pair\") pod \"whisker-6f8c6b6676-nbjbm\" (UID: \"de708d80-6a60-4bbc-b289-35d0d689d019\") " pod="calico-system/whisker-6f8c6b6676-nbjbm" Aug 13 07:15:13.974163 kubelet[2544]: I0813 07:15:13.973651 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de708d80-6a60-4bbc-b289-35d0d689d019-whisker-ca-bundle\") pod \"whisker-6f8c6b6676-nbjbm\" (UID: \"de708d80-6a60-4bbc-b289-35d0d689d019\") " pod="calico-system/whisker-6f8c6b6676-nbjbm" Aug 13 07:15:14.124027 containerd[1463]: time="2025-08-13T07:15:14.123967555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f8c6b6676-nbjbm,Uid:de708d80-6a60-4bbc-b289-35d0d689d019,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:14.364656 systemd-networkd[1378]: cali427eb7a702d: Link UP Aug 13 07:15:14.365060 systemd-networkd[1378]: cali427eb7a702d: Gained carrier Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.173 [INFO][3837] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 
07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.200 [INFO][3837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0 whisker-6f8c6b6676- calico-system de708d80-6a60-4bbc-b289-35d0d689d019 913 0 2025-08-13 07:15:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f8c6b6676 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal whisker-6f8c6b6676-nbjbm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali427eb7a702d [] [] }} ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Namespace="calico-system" Pod="whisker-6f8c6b6676-nbjbm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.201 [INFO][3837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Namespace="calico-system" Pod="whisker-6f8c6b6676-nbjbm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.269 [INFO][3874] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" HandleID="k8s-pod-network.a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.270 [INFO][3874] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" HandleID="k8s-pod-network.a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", "pod":"whisker-6f8c6b6676-nbjbm", "timestamp":"2025-08-13 07:15:14.269783524 +0000 UTC"}, Hostname:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.270 [INFO][3874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.270 [INFO][3874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.270 [INFO][3874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.282 [INFO][3874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.290 [INFO][3874] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.299 [INFO][3874] ipam/ipam.go 511: Trying affinity for 192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.302 [INFO][3874] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.307 [INFO][3874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.307 [INFO][3874] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.128/26 handle="k8s-pod-network.a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.309 [INFO][3874] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6 Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.320 [INFO][3874] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.94.128/26 handle="k8s-pod-network.a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.332 [INFO][3874] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.129/26] block=192.168.94.128/26 handle="k8s-pod-network.a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.333 [INFO][3874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.129/26] handle="k8s-pod-network.a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.334 [INFO][3874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:14.402712 containerd[1463]: 2025-08-13 07:15:14.334 [INFO][3874] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.129/26] IPv6=[] ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" HandleID="k8s-pod-network.a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" Aug 13 07:15:14.403970 containerd[1463]: 2025-08-13 07:15:14.340 [INFO][3837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Namespace="calico-system" Pod="whisker-6f8c6b6676-nbjbm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0", GenerateName:"whisker-6f8c6b6676-", Namespace:"calico-system", SelfLink:"", UID:"de708d80-6a60-4bbc-b289-35d0d689d019", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f8c6b6676", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-6f8c6b6676-nbjbm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.94.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali427eb7a702d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:14.403970 containerd[1463]: 2025-08-13 07:15:14.340 [INFO][3837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.129/32] ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Namespace="calico-system" Pod="whisker-6f8c6b6676-nbjbm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" Aug 13 07:15:14.403970 containerd[1463]: 2025-08-13 07:15:14.340 [INFO][3837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali427eb7a702d 
ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Namespace="calico-system" Pod="whisker-6f8c6b6676-nbjbm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" Aug 13 07:15:14.403970 containerd[1463]: 2025-08-13 07:15:14.367 [INFO][3837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Namespace="calico-system" Pod="whisker-6f8c6b6676-nbjbm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" Aug 13 07:15:14.403970 containerd[1463]: 2025-08-13 07:15:14.368 [INFO][3837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Namespace="calico-system" Pod="whisker-6f8c6b6676-nbjbm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0", GenerateName:"whisker-6f8c6b6676-", Namespace:"calico-system", SelfLink:"", UID:"de708d80-6a60-4bbc-b289-35d0d689d019", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f8c6b6676", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6", Pod:"whisker-6f8c6b6676-nbjbm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.94.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali427eb7a702d", MAC:"02:21:0e:22:d7:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:14.403970 containerd[1463]: 2025-08-13 07:15:14.397 [INFO][3837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6" Namespace="calico-system" Pod="whisker-6f8c6b6676-nbjbm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--6f8c6b6676--nbjbm-eth0" Aug 13 07:15:14.456689 containerd[1463]: time="2025-08-13T07:15:14.456562694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:14.457147 containerd[1463]: time="2025-08-13T07:15:14.456899997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:14.457147 containerd[1463]: time="2025-08-13T07:15:14.456966631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:14.457838 containerd[1463]: time="2025-08-13T07:15:14.457123992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:14.482942 kubelet[2544]: I0813 07:15:14.469903 2544 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37af0644-e76e-4019-8384-f9b67a1e33ea" path="/var/lib/kubelet/pods/37af0644-e76e-4019-8384-f9b67a1e33ea/volumes" Aug 13 07:15:14.526057 systemd[1]: Started cri-containerd-a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6.scope - libcontainer container a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6. Aug 13 07:15:14.683057 containerd[1463]: time="2025-08-13T07:15:14.682896163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f8c6b6676-nbjbm,Uid:de708d80-6a60-4bbc-b289-35d0d689d019,Namespace:calico-system,Attempt:0,} returns sandbox id \"a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6\"" Aug 13 07:15:14.688535 containerd[1463]: time="2025-08-13T07:15:14.687644641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:15:15.767747 containerd[1463]: time="2025-08-13T07:15:15.767670571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:15.769104 containerd[1463]: time="2025-08-13T07:15:15.769042237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:15:15.770746 containerd[1463]: time="2025-08-13T07:15:15.770666068Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:15.774491 containerd[1463]: time="2025-08-13T07:15:15.774379357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:15.775764 containerd[1463]: 
time="2025-08-13T07:15:15.775579355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.087876122s" Aug 13 07:15:15.775764 containerd[1463]: time="2025-08-13T07:15:15.775629605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:15:15.780821 containerd[1463]: time="2025-08-13T07:15:15.780764137Z" level=info msg="CreateContainer within sandbox \"a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:15:15.799367 containerd[1463]: time="2025-08-13T07:15:15.799247690Z" level=info msg="CreateContainer within sandbox \"a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4c5358e8a3b5dceaf973cf40859255ed76b04e1dc484dd2cb72cf8b29a232c78\"" Aug 13 07:15:15.800263 containerd[1463]: time="2025-08-13T07:15:15.800221382Z" level=info msg="StartContainer for \"4c5358e8a3b5dceaf973cf40859255ed76b04e1dc484dd2cb72cf8b29a232c78\"" Aug 13 07:15:15.849741 systemd[1]: Started cri-containerd-4c5358e8a3b5dceaf973cf40859255ed76b04e1dc484dd2cb72cf8b29a232c78.scope - libcontainer container 4c5358e8a3b5dceaf973cf40859255ed76b04e1dc484dd2cb72cf8b29a232c78. 
Aug 13 07:15:15.922642 containerd[1463]: time="2025-08-13T07:15:15.922516691Z" level=info msg="StartContainer for \"4c5358e8a3b5dceaf973cf40859255ed76b04e1dc484dd2cb72cf8b29a232c78\" returns successfully" Aug 13 07:15:15.925538 containerd[1463]: time="2025-08-13T07:15:15.924823095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:15:16.412717 systemd-networkd[1378]: cali427eb7a702d: Gained IPv6LL Aug 13 07:15:16.458221 containerd[1463]: time="2025-08-13T07:15:16.458070272Z" level=info msg="StopPodSandbox for \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\"" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.524 [INFO][4072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.524 [INFO][4072] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" iface="eth0" netns="/var/run/netns/cni-f63658ca-6d24-520b-01eb-f439d6266341" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.525 [INFO][4072] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" iface="eth0" netns="/var/run/netns/cni-f63658ca-6d24-520b-01eb-f439d6266341" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.525 [INFO][4072] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" iface="eth0" netns="/var/run/netns/cni-f63658ca-6d24-520b-01eb-f439d6266341" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.525 [INFO][4072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.525 [INFO][4072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.555 [INFO][4079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" HandleID="k8s-pod-network.d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.555 [INFO][4079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.556 [INFO][4079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.567 [WARNING][4079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" HandleID="k8s-pod-network.d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.567 [INFO][4079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" HandleID="k8s-pod-network.d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.570 [INFO][4079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:16.576042 containerd[1463]: 2025-08-13 07:15:16.572 [INFO][4072] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:16.576042 containerd[1463]: time="2025-08-13T07:15:16.574894148Z" level=info msg="TearDown network for sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\" successfully" Aug 13 07:15:16.576042 containerd[1463]: time="2025-08-13T07:15:16.574937825Z" level=info msg="StopPodSandbox for \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\" returns successfully" Aug 13 07:15:16.579124 containerd[1463]: time="2025-08-13T07:15:16.577510357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qt8wq,Uid:227e33f7-b859-4c0c-b764-8eade43b88be,Namespace:kube-system,Attempt:1,}" Aug 13 07:15:16.581162 systemd[1]: run-netns-cni\x2df63658ca\x2d6d24\x2d520b\x2d01eb\x2df439d6266341.mount: Deactivated successfully. 
Aug 13 07:15:16.740609 systemd-networkd[1378]: cali30a2d438b07: Link UP Aug 13 07:15:16.742206 systemd-networkd[1378]: cali30a2d438b07: Gained carrier Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.627 [INFO][4085] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.648 [INFO][4085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0 coredns-7c65d6cfc9- kube-system 227e33f7-b859-4c0c-b764-8eade43b88be 929 0 2025-08-13 07:14:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal coredns-7c65d6cfc9-qt8wq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali30a2d438b07 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt8wq" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.650 [INFO][4085] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt8wq" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.686 [INFO][4097] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" 
HandleID="k8s-pod-network.0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.686 [INFO][4097] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" HandleID="k8s-pod-network.0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", "pod":"coredns-7c65d6cfc9-qt8wq", "timestamp":"2025-08-13 07:15:16.686620299 +0000 UTC"}, Hostname:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.686 [INFO][4097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.687 [INFO][4097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.687 [INFO][4097] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.697 [INFO][4097] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.706 [INFO][4097] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.712 [INFO][4097] ipam/ipam.go 511: Trying affinity for 192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.714 [INFO][4097] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.717 [INFO][4097] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.717 [INFO][4097] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.128/26 handle="k8s-pod-network.0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.719 [INFO][4097] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58 Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.725 [INFO][4097] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.94.128/26 handle="k8s-pod-network.0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.734 [INFO][4097] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.130/26] block=192.168.94.128/26 handle="k8s-pod-network.0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.734 [INFO][4097] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.130/26] handle="k8s-pod-network.0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.734 [INFO][4097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:16.775061 containerd[1463]: 2025-08-13 07:15:16.734 [INFO][4097] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.130/26] IPv6=[] ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" HandleID="k8s-pod-network.0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.777904 containerd[1463]: 2025-08-13 07:15:16.736 [INFO][4085] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt8wq" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"227e33f7-b859-4c0c-b764-8eade43b88be", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7c65d6cfc9-qt8wq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30a2d438b07", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:16.777904 containerd[1463]: 2025-08-13 07:15:16.736 [INFO][4085] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.130/32] ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-qt8wq" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.777904 containerd[1463]: 2025-08-13 07:15:16.736 [INFO][4085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30a2d438b07 ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt8wq" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.777904 containerd[1463]: 2025-08-13 07:15:16.742 [INFO][4085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt8wq" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.777904 containerd[1463]: 2025-08-13 07:15:16.744 [INFO][4085] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt8wq" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"227e33f7-b859-4c0c-b764-8eade43b88be", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58", Pod:"coredns-7c65d6cfc9-qt8wq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30a2d438b07", MAC:"3e:e5:48:aa:37:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:16.777904 containerd[1463]: 2025-08-13 07:15:16.770 [INFO][4085] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt8wq" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:16.822672 containerd[1463]: time="2025-08-13T07:15:16.820793600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:16.823122 containerd[1463]: time="2025-08-13T07:15:16.822938512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:16.823122 containerd[1463]: time="2025-08-13T07:15:16.823022810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:16.823865 containerd[1463]: time="2025-08-13T07:15:16.823731521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:16.869721 systemd[1]: Started cri-containerd-0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58.scope - libcontainer container 0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58. Aug 13 07:15:16.944548 containerd[1463]: time="2025-08-13T07:15:16.944284517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qt8wq,Uid:227e33f7-b859-4c0c-b764-8eade43b88be,Namespace:kube-system,Attempt:1,} returns sandbox id \"0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58\"" Aug 13 07:15:16.949923 containerd[1463]: time="2025-08-13T07:15:16.949866299Z" level=info msg="CreateContainer within sandbox \"0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:15:16.991633 containerd[1463]: time="2025-08-13T07:15:16.991020346Z" level=info msg="CreateContainer within sandbox \"0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"292864738e160be470b9db2192907496115e2e7fef105488b9c91d232ef048fa\"" Aug 13 07:15:16.993491 containerd[1463]: time="2025-08-13T07:15:16.992008748Z" level=info msg="StartContainer for \"292864738e160be470b9db2192907496115e2e7fef105488b9c91d232ef048fa\"" 
Aug 13 07:15:17.049830 systemd[1]: Started cri-containerd-292864738e160be470b9db2192907496115e2e7fef105488b9c91d232ef048fa.scope - libcontainer container 292864738e160be470b9db2192907496115e2e7fef105488b9c91d232ef048fa. Aug 13 07:15:17.152940 containerd[1463]: time="2025-08-13T07:15:17.152882314Z" level=info msg="StartContainer for \"292864738e160be470b9db2192907496115e2e7fef105488b9c91d232ef048fa\" returns successfully" Aug 13 07:15:17.457832 containerd[1463]: time="2025-08-13T07:15:17.457061601Z" level=info msg="StopPodSandbox for \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\"" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.599 [INFO][4222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.600 [INFO][4222] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" iface="eth0" netns="/var/run/netns/cni-69313d7c-7d94-0f96-040f-9dc19bc6c9de" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.601 [INFO][4222] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" iface="eth0" netns="/var/run/netns/cni-69313d7c-7d94-0f96-040f-9dc19bc6c9de" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.602 [INFO][4222] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" iface="eth0" netns="/var/run/netns/cni-69313d7c-7d94-0f96-040f-9dc19bc6c9de" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.602 [INFO][4222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.602 [INFO][4222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.648 [INFO][4229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" HandleID="k8s-pod-network.d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.649 [INFO][4229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.649 [INFO][4229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.660 [WARNING][4229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" HandleID="k8s-pod-network.d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.660 [INFO][4229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" HandleID="k8s-pod-network.d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.662 [INFO][4229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:17.667616 containerd[1463]: 2025-08-13 07:15:17.664 [INFO][4222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:17.668406 containerd[1463]: time="2025-08-13T07:15:17.668036090Z" level=info msg="TearDown network for sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\" successfully" Aug 13 07:15:17.668406 containerd[1463]: time="2025-08-13T07:15:17.668076823Z" level=info msg="StopPodSandbox for \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\" returns successfully" Aug 13 07:15:17.670357 containerd[1463]: time="2025-08-13T07:15:17.670314882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694b6c886d-bxdxg,Uid:d9138eb6-4b16-4ca5-add5-a7780c9f57fc,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:15:17.791220 kubelet[2544]: I0813 07:15:17.791139 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qt8wq" podStartSLOduration=39.791108436 podStartE2EDuration="39.791108436s" 
podCreationTimestamp="2025-08-13 07:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:15:17.750213442 +0000 UTC m=+45.440834671" watchObservedRunningTime="2025-08-13 07:15:17.791108436 +0000 UTC m=+45.481729664" Aug 13 07:15:17.810247 systemd[1]: run-netns-cni\x2d69313d7c\x2d7d94\x2d0f96\x2d040f\x2d9dc19bc6c9de.mount: Deactivated successfully. Aug 13 07:15:17.886021 systemd-networkd[1378]: cali30a2d438b07: Gained IPv6LL Aug 13 07:15:17.965647 systemd-networkd[1378]: calif803dbe35f4: Link UP Aug 13 07:15:17.968054 systemd-networkd[1378]: calif803dbe35f4: Gained carrier Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.750 [INFO][4235] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.782 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0 calico-apiserver-694b6c886d- calico-apiserver d9138eb6-4b16-4ca5-add5-a7780c9f57fc 938 0 2025-08-13 07:14:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:694b6c886d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal calico-apiserver-694b6c886d-bxdxg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif803dbe35f4 [] [] }} ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-bxdxg" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-" Aug 13 
07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.782 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-bxdxg" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.891 [INFO][4251] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" HandleID="k8s-pod-network.c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.891 [INFO][4251] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" HandleID="k8s-pod-network.c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", "pod":"calico-apiserver-694b6c886d-bxdxg", "timestamp":"2025-08-13 07:15:17.891089127 +0000 UTC"}, Hostname:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.891 [INFO][4251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.891 [INFO][4251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.891 [INFO][4251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.909 [INFO][4251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.919 [INFO][4251] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.925 [INFO][4251] ipam/ipam.go 511: Trying affinity for 192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.927 [INFO][4251] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.933 [INFO][4251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.933 [INFO][4251] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.128/26 handle="k8s-pod-network.c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.935 [INFO][4251] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4 Aug 13 
07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.944 [INFO][4251] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.128/26 handle="k8s-pod-network.c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.955 [INFO][4251] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.131/26] block=192.168.94.128/26 handle="k8s-pod-network.c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.955 [INFO][4251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.131/26] handle="k8s-pod-network.c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.956 [INFO][4251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:18.000701 containerd[1463]: 2025-08-13 07:15:17.956 [INFO][4251] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.131/26] IPv6=[] ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" HandleID="k8s-pod-network.c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:18.003000 containerd[1463]: 2025-08-13 07:15:17.961 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-bxdxg" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0", GenerateName:"calico-apiserver-694b6c886d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9138eb6-4b16-4ca5-add5-a7780c9f57fc", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694b6c886d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-694b6c886d-bxdxg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif803dbe35f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:18.003000 containerd[1463]: 2025-08-13 07:15:17.961 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.131/32] ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-bxdxg" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:18.003000 containerd[1463]: 2025-08-13 07:15:17.961 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif803dbe35f4 ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-bxdxg" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:18.003000 containerd[1463]: 2025-08-13 07:15:17.969 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-bxdxg" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:18.003000 containerd[1463]: 2025-08-13 07:15:17.969 [INFO][4235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-bxdxg" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0", GenerateName:"calico-apiserver-694b6c886d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9138eb6-4b16-4ca5-add5-a7780c9f57fc", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694b6c886d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4", Pod:"calico-apiserver-694b6c886d-bxdxg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif803dbe35f4", MAC:"f2:ac:9a:bc:bf:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:18.003000 containerd[1463]: 
2025-08-13 07:15:17.996 [INFO][4235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-bxdxg" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:18.040434 containerd[1463]: time="2025-08-13T07:15:18.039054510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:18.040434 containerd[1463]: time="2025-08-13T07:15:18.040322126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:18.040434 containerd[1463]: time="2025-08-13T07:15:18.040344736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:18.043509 containerd[1463]: time="2025-08-13T07:15:18.040898910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:18.092629 systemd[1]: run-containerd-runc-k8s.io-c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4-runc.96h4AW.mount: Deactivated successfully. Aug 13 07:15:18.103691 systemd[1]: Started cri-containerd-c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4.scope - libcontainer container c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4. 
Aug 13 07:15:18.222340 containerd[1463]: time="2025-08-13T07:15:18.222290872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694b6c886d-bxdxg,Uid:d9138eb6-4b16-4ca5-add5-a7780c9f57fc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4\"" Aug 13 07:15:18.462775 containerd[1463]: time="2025-08-13T07:15:18.461905466Z" level=info msg="StopPodSandbox for \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\"" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.616 [INFO][4321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.616 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" iface="eth0" netns="/var/run/netns/cni-0ff4aab9-3c0f-47f2-29f4-5b4cb1e928d6" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.616 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" iface="eth0" netns="/var/run/netns/cni-0ff4aab9-3c0f-47f2-29f4-5b4cb1e928d6" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.618 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" iface="eth0" netns="/var/run/netns/cni-0ff4aab9-3c0f-47f2-29f4-5b4cb1e928d6" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.618 [INFO][4321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.618 [INFO][4321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.692 [INFO][4335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" HandleID="k8s-pod-network.a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.693 [INFO][4335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.694 [INFO][4335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.707 [WARNING][4335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" HandleID="k8s-pod-network.a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.707 [INFO][4335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" HandleID="k8s-pod-network.a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.711 [INFO][4335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:18.720547 containerd[1463]: 2025-08-13 07:15:18.714 [INFO][4321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:18.721294 containerd[1463]: time="2025-08-13T07:15:18.720654034Z" level=info msg="TearDown network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\" successfully" Aug 13 07:15:18.721294 containerd[1463]: time="2025-08-13T07:15:18.720696027Z" level=info msg="StopPodSandbox for \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\" returns successfully" Aug 13 07:15:18.730203 systemd[1]: run-netns-cni\x2d0ff4aab9\x2d3c0f\x2d47f2\x2d29f4\x2d5b4cb1e928d6.mount: Deactivated successfully. Aug 13 07:15:18.732941 containerd[1463]: time="2025-08-13T07:15:18.731901012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9swvw,Uid:707e0f61-36a7-4b17-96a3-b98a0613f9bd,Namespace:calico-system,Attempt:1,}" Aug 13 07:15:18.802450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464217650.mount: Deactivated successfully. 
Aug 13 07:15:18.816806 containerd[1463]: time="2025-08-13T07:15:18.816746826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:18.819569 containerd[1463]: time="2025-08-13T07:15:18.819317127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:15:18.822733 containerd[1463]: time="2025-08-13T07:15:18.822572212Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:18.835248 containerd[1463]: time="2025-08-13T07:15:18.835184502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:18.843601 containerd[1463]: time="2025-08-13T07:15:18.840175981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.9152759s" Aug 13 07:15:18.843601 containerd[1463]: time="2025-08-13T07:15:18.840230067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:15:18.850430 containerd[1463]: time="2025-08-13T07:15:18.850382891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:15:18.855979 containerd[1463]: time="2025-08-13T07:15:18.855347729Z" level=info msg="CreateContainer within sandbox 
\"a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:15:18.896951 containerd[1463]: time="2025-08-13T07:15:18.896767271Z" level=info msg="CreateContainer within sandbox \"a03d336a3ac0ac5f58e49f290f4e9ef66e5ea4c876b873c5a6ce18f93a1405f6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"03edabfa1023c6241a0edebb47fa2268d7eaf1cadf6aa6b06d02db8bffd15322\"" Aug 13 07:15:18.903169 containerd[1463]: time="2025-08-13T07:15:18.901630468Z" level=info msg="StartContainer for \"03edabfa1023c6241a0edebb47fa2268d7eaf1cadf6aa6b06d02db8bffd15322\"" Aug 13 07:15:18.984718 systemd[1]: Started cri-containerd-03edabfa1023c6241a0edebb47fa2268d7eaf1cadf6aa6b06d02db8bffd15322.scope - libcontainer container 03edabfa1023c6241a0edebb47fa2268d7eaf1cadf6aa6b06d02db8bffd15322. Aug 13 07:15:19.037864 systemd-networkd[1378]: calie446ef15128: Link UP Aug 13 07:15:19.039818 systemd-networkd[1378]: calie446ef15128: Gained carrier Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.824 [INFO][4348] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.862 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0 csi-node-driver- calico-system 707e0f61-36a7-4b17-96a3-b98a0613f9bd 954 0 2025-08-13 07:14:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal csi-node-driver-9swvw eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] calie446ef15128 [] [] }} ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Namespace="calico-system" Pod="csi-node-driver-9swvw" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.862 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Namespace="calico-system" Pod="csi-node-driver-9swvw" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.961 [INFO][4367] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" HandleID="k8s-pod-network.98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.961 [INFO][4367] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" HandleID="k8s-pod-network.98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000390610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", "pod":"csi-node-driver-9swvw", "timestamp":"2025-08-13 07:15:18.961346253 +0000 UTC"}, Hostname:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.962 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.962 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.962 [INFO][4367] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.984 [INFO][4367] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:18.993 [INFO][4367] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.000 [INFO][4367] ipam/ipam.go 511: Trying affinity for 192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.003 [INFO][4367] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.007 [INFO][4367] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.007 [INFO][4367] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.128/26 handle="k8s-pod-network.98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" 
host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.009 [INFO][4367] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170 Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.016 [INFO][4367] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.128/26 handle="k8s-pod-network.98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.029 [INFO][4367] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.132/26] block=192.168.94.128/26 handle="k8s-pod-network.98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.030 [INFO][4367] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.132/26] handle="k8s-pod-network.98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.030 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:19.064239 containerd[1463]: 2025-08-13 07:15:19.030 [INFO][4367] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.132/26] IPv6=[] ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" HandleID="k8s-pod-network.98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:19.066066 containerd[1463]: 2025-08-13 07:15:19.033 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Namespace="calico-system" Pod="csi-node-driver-9swvw" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"707e0f61-36a7-4b17-96a3-b98a0613f9bd", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"", 
Pod:"csi-node-driver-9swvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie446ef15128", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:19.066066 containerd[1463]: 2025-08-13 07:15:19.033 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.132/32] ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Namespace="calico-system" Pod="csi-node-driver-9swvw" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:19.066066 containerd[1463]: 2025-08-13 07:15:19.033 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie446ef15128 ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Namespace="calico-system" Pod="csi-node-driver-9swvw" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:19.066066 containerd[1463]: 2025-08-13 07:15:19.040 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Namespace="calico-system" Pod="csi-node-driver-9swvw" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:19.066066 containerd[1463]: 2025-08-13 07:15:19.041 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" Namespace="calico-system" Pod="csi-node-driver-9swvw" 
WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"707e0f61-36a7-4b17-96a3-b98a0613f9bd", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170", Pod:"csi-node-driver-9swvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie446ef15128", MAC:"3a:47:6c:cf:cf:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:19.066066 containerd[1463]: 2025-08-13 07:15:19.061 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170" 
Namespace="calico-system" Pod="csi-node-driver-9swvw" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:19.094773 containerd[1463]: time="2025-08-13T07:15:19.094600357Z" level=info msg="StartContainer for \"03edabfa1023c6241a0edebb47fa2268d7eaf1cadf6aa6b06d02db8bffd15322\" returns successfully" Aug 13 07:15:19.115810 containerd[1463]: time="2025-08-13T07:15:19.115625484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:19.117504 containerd[1463]: time="2025-08-13T07:15:19.116720045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:19.117504 containerd[1463]: time="2025-08-13T07:15:19.116812233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:19.117504 containerd[1463]: time="2025-08-13T07:15:19.116994892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:19.159740 systemd[1]: Started cri-containerd-98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170.scope - libcontainer container 98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170. 
Aug 13 07:15:19.217190 containerd[1463]: time="2025-08-13T07:15:19.217075960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9swvw,Uid:707e0f61-36a7-4b17-96a3-b98a0613f9bd,Namespace:calico-system,Attempt:1,} returns sandbox id \"98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170\"" Aug 13 07:15:19.420822 systemd-networkd[1378]: calif803dbe35f4: Gained IPv6LL Aug 13 07:15:19.777889 kubelet[2544]: I0813 07:15:19.775622 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6f8c6b6676-nbjbm" podStartSLOduration=2.613657285 podStartE2EDuration="6.775593403s" podCreationTimestamp="2025-08-13 07:15:13 +0000 UTC" firstStartedPulling="2025-08-13 07:15:14.687195713 +0000 UTC m=+42.377816928" lastFinishedPulling="2025-08-13 07:15:18.849131838 +0000 UTC m=+46.539753046" observedRunningTime="2025-08-13 07:15:19.7738048 +0000 UTC m=+47.464426029" watchObservedRunningTime="2025-08-13 07:15:19.775593403 +0000 UTC m=+47.466214633" Aug 13 07:15:20.144622 kubelet[2544]: I0813 07:15:20.144155 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:15:20.466512 containerd[1463]: time="2025-08-13T07:15:20.465705093Z" level=info msg="StopPodSandbox for \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\"" Aug 13 07:15:20.468486 containerd[1463]: time="2025-08-13T07:15:20.467874512Z" level=info msg="StopPodSandbox for \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\"" Aug 13 07:15:20.490166 containerd[1463]: time="2025-08-13T07:15:20.489806383Z" level=info msg="StopPodSandbox for \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\"" Aug 13 07:15:20.505002 containerd[1463]: time="2025-08-13T07:15:20.504940820Z" level=info msg="StopPodSandbox for \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\"" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.751 [INFO][4527] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.754 [INFO][4527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" iface="eth0" netns="/var/run/netns/cni-bf29b2f7-7939-cf52-754f-9d1d3bd011de" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.759 [INFO][4527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" iface="eth0" netns="/var/run/netns/cni-bf29b2f7-7939-cf52-754f-9d1d3bd011de" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.765 [INFO][4527] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" iface="eth0" netns="/var/run/netns/cni-bf29b2f7-7939-cf52-754f-9d1d3bd011de" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.766 [INFO][4527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.766 [INFO][4527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.965 [INFO][4570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" HandleID="k8s-pod-network.73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.973 [INFO][4570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.973 [INFO][4570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.996 [WARNING][4570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" HandleID="k8s-pod-network.73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:20.996 [INFO][4570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" HandleID="k8s-pod-network.73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:21.003 [INFO][4570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:21.024882 containerd[1463]: 2025-08-13 07:15:21.010 [INFO][4527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:21.033863 containerd[1463]: time="2025-08-13T07:15:21.031766847Z" level=info msg="TearDown network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\" successfully" Aug 13 07:15:21.033863 containerd[1463]: time="2025-08-13T07:15:21.031819539Z" level=info msg="StopPodSandbox for \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\" returns successfully" Aug 13 07:15:21.035375 systemd[1]: run-netns-cni\x2dbf29b2f7\x2d7939\x2dcf52\x2d754f\x2d9d1d3bd011de.mount: Deactivated successfully. 
Aug 13 07:15:21.046933 containerd[1463]: time="2025-08-13T07:15:21.044535508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kc2rx,Uid:2016744c-fc88-45ab-9b7e-62d31ebc5f72,Namespace:calico-system,Attempt:1,}" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:20.794 [INFO][4544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:20.794 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" iface="eth0" netns="/var/run/netns/cni-70514ade-6f39-dd56-47d3-7640ca250a8e" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:20.795 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" iface="eth0" netns="/var/run/netns/cni-70514ade-6f39-dd56-47d3-7640ca250a8e" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:20.802 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" iface="eth0" netns="/var/run/netns/cni-70514ade-6f39-dd56-47d3-7640ca250a8e" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:20.802 [INFO][4544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:20.802 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:21.005 [INFO][4578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" HandleID="k8s-pod-network.f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:21.009 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:21.009 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:21.055 [WARNING][4578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" HandleID="k8s-pod-network.f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:21.055 [INFO][4578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" HandleID="k8s-pod-network.f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:21.061 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:21.069493 containerd[1463]: 2025-08-13 07:15:21.063 [INFO][4544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:21.074260 containerd[1463]: time="2025-08-13T07:15:21.072913006Z" level=info msg="TearDown network for sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\" successfully" Aug 13 07:15:21.076074 containerd[1463]: time="2025-08-13T07:15:21.075934947Z" level=info msg="StopPodSandbox for \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\" returns successfully" Aug 13 07:15:21.081130 systemd[1]: run-netns-cni\x2d70514ade\x2d6f39\x2ddd56\x2d47d3\x2d7640ca250a8e.mount: Deactivated successfully. 
Aug 13 07:15:21.085536 systemd-networkd[1378]: calie446ef15128: Gained IPv6LL Aug 13 07:15:21.089205 containerd[1463]: time="2025-08-13T07:15:21.087815094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbmzm,Uid:392a02f5-e055-4b6b-9098-a7e0976f9e04,Namespace:kube-system,Attempt:1,}" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:20.854 [INFO][4524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:20.858 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" iface="eth0" netns="/var/run/netns/cni-0d25e296-451d-99fd-f2c7-0b4cf8eef2bb" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:20.861 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" iface="eth0" netns="/var/run/netns/cni-0d25e296-451d-99fd-f2c7-0b4cf8eef2bb" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:20.861 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" iface="eth0" netns="/var/run/netns/cni-0d25e296-451d-99fd-f2c7-0b4cf8eef2bb" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:20.861 [INFO][4524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:20.861 [INFO][4524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:21.090 [INFO][4590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" HandleID="k8s-pod-network.e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:21.100 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:21.100 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:21.148 [WARNING][4590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" HandleID="k8s-pod-network.e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:21.148 [INFO][4590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" HandleID="k8s-pod-network.e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:21.155 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:21.184353 containerd[1463]: 2025-08-13 07:15:21.170 [INFO][4524] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:21.197952 containerd[1463]: time="2025-08-13T07:15:21.196578326Z" level=info msg="TearDown network for sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\" successfully" Aug 13 07:15:21.197952 containerd[1463]: time="2025-08-13T07:15:21.196622826Z" level=info msg="StopPodSandbox for \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\" returns successfully" Aug 13 07:15:21.199426 containerd[1463]: time="2025-08-13T07:15:21.199383298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d84b9fbf7-fwlqp,Uid:ec5447bc-9529-4230-8377-9fbb134088cb,Namespace:calico-system,Attempt:1,}" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:20.894 [INFO][4543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:20.897 [INFO][4543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" iface="eth0" netns="/var/run/netns/cni-923ba2e3-c21f-93f7-dea2-63f16d492057" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:20.898 [INFO][4543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" iface="eth0" netns="/var/run/netns/cni-923ba2e3-c21f-93f7-dea2-63f16d492057" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:20.912 [INFO][4543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" iface="eth0" netns="/var/run/netns/cni-923ba2e3-c21f-93f7-dea2-63f16d492057" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:20.912 [INFO][4543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:20.913 [INFO][4543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:21.216 [INFO][4596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" HandleID="k8s-pod-network.320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:21.230 [INFO][4596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:21.230 [INFO][4596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:21.259 [WARNING][4596] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" HandleID="k8s-pod-network.320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:21.259 [INFO][4596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" HandleID="k8s-pod-network.320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:21.266 [INFO][4596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:21.306786 containerd[1463]: 2025-08-13 07:15:21.289 [INFO][4543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:21.311017 containerd[1463]: time="2025-08-13T07:15:21.307228454Z" level=info msg="TearDown network for sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\" successfully" Aug 13 07:15:21.311017 containerd[1463]: time="2025-08-13T07:15:21.307265941Z" level=info msg="StopPodSandbox for \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\" returns successfully" Aug 13 07:15:21.311017 containerd[1463]: time="2025-08-13T07:15:21.308236239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694b6c886d-h85bl,Uid:d3025ee2-b9db-46e5-ad3b-6d08aebd9c73,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:15:21.738404 systemd-networkd[1378]: cali404224b6227: Link UP Aug 13 07:15:21.750339 systemd-networkd[1378]: cali404224b6227: Gained carrier Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.318 [INFO][4627] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.367 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0 coredns-7c65d6cfc9- kube-system 392a02f5-e055-4b6b-9098-a7e0976f9e04 983 0 2025-08-13 07:14:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal coredns-7c65d6cfc9-sbmzm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali404224b6227 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbmzm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.368 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbmzm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.589 [INFO][4660] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" HandleID="k8s-pod-network.d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.592 [INFO][4660] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" HandleID="k8s-pod-network.d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042eeb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", "pod":"coredns-7c65d6cfc9-sbmzm", "timestamp":"2025-08-13 07:15:21.58971445 +0000 UTC"}, Hostname:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.592 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.592 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.592 [INFO][4660] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.622 [INFO][4660] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.640 [INFO][4660] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.659 [INFO][4660] ipam/ipam.go 511: Trying affinity for 192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.665 [INFO][4660] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.672 [INFO][4660] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.673 [INFO][4660] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.128/26 handle="k8s-pod-network.d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.677 [INFO][4660] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.687 [INFO][4660] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.94.128/26 handle="k8s-pod-network.d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.708 [INFO][4660] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.133/26] block=192.168.94.128/26 handle="k8s-pod-network.d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.709 [INFO][4660] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.133/26] handle="k8s-pod-network.d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.710 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:21.801690 containerd[1463]: 2025-08-13 07:15:21.710 [INFO][4660] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.133/26] IPv6=[] ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" HandleID="k8s-pod-network.d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.804699 containerd[1463]: 2025-08-13 07:15:21.725 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbmzm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"392a02f5-e055-4b6b-9098-a7e0976f9e04", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7c65d6cfc9-sbmzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali404224b6227", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:21.804699 containerd[1463]: 2025-08-13 07:15:21.725 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.133/32] ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-sbmzm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.804699 containerd[1463]: 2025-08-13 07:15:21.725 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali404224b6227 ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbmzm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.804699 containerd[1463]: 2025-08-13 07:15:21.751 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbmzm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.804699 containerd[1463]: 2025-08-13 07:15:21.754 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbmzm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"392a02f5-e055-4b6b-9098-a7e0976f9e04", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a", Pod:"coredns-7c65d6cfc9-sbmzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali404224b6227", MAC:"ae:d2:46:99:10:8a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:21.804699 containerd[1463]: 2025-08-13 07:15:21.788 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbmzm" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:21.890334 systemd-networkd[1378]: calidfd39197ea0: Link UP Aug 13 07:15:21.892652 systemd-networkd[1378]: calidfd39197ea0: Gained carrier Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.393 [INFO][4637] cni-plugin/utils.go 100: File /var/lib/calico/mtu does 
not exist Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.446 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0 calico-kube-controllers-d84b9fbf7- calico-system ec5447bc-9529-4230-8377-9fbb134088cb 984 0 2025-08-13 07:14:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d84b9fbf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal calico-kube-controllers-d84b9fbf7-fwlqp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidfd39197ea0 [] [] }} ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Namespace="calico-system" Pod="calico-kube-controllers-d84b9fbf7-fwlqp" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.447 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Namespace="calico-system" Pod="calico-kube-controllers-d84b9fbf7-fwlqp" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.668 [INFO][4671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" HandleID="k8s-pod-network.2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" 
Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.669 [INFO][4671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" HandleID="k8s-pod-network.2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333990), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", "pod":"calico-kube-controllers-d84b9fbf7-fwlqp", "timestamp":"2025-08-13 07:15:21.668706153 +0000 UTC"}, Hostname:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.669 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.710 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.715 [INFO][4671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.766 [INFO][4671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.799 [INFO][4671] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.812 [INFO][4671] ipam/ipam.go 511: Trying affinity for 192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.819 [INFO][4671] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.827 [INFO][4671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.828 [INFO][4671] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.128/26 handle="k8s-pod-network.2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.833 [INFO][4671] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56 Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.845 [INFO][4671] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.94.128/26 handle="k8s-pod-network.2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.866 [INFO][4671] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.134/26] block=192.168.94.128/26 handle="k8s-pod-network.2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.866 [INFO][4671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.134/26] handle="k8s-pod-network.2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.866 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:21.940971 containerd[1463]: 2025-08-13 07:15:21.866 [INFO][4671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.134/26] IPv6=[] ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" HandleID="k8s-pod-network.2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.942314 containerd[1463]: 2025-08-13 07:15:21.875 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Namespace="calico-system" Pod="calico-kube-controllers-d84b9fbf7-fwlqp" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0", GenerateName:"calico-kube-controllers-d84b9fbf7-", Namespace:"calico-system", SelfLink:"", UID:"ec5447bc-9529-4230-8377-9fbb134088cb", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d84b9fbf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-d84b9fbf7-fwlqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidfd39197ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:21.942314 containerd[1463]: 2025-08-13 07:15:21.875 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.134/32] ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Namespace="calico-system" Pod="calico-kube-controllers-d84b9fbf7-fwlqp" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.942314 containerd[1463]: 2025-08-13 07:15:21.875 
[INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfd39197ea0 ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Namespace="calico-system" Pod="calico-kube-controllers-d84b9fbf7-fwlqp" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.942314 containerd[1463]: 2025-08-13 07:15:21.896 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Namespace="calico-system" Pod="calico-kube-controllers-d84b9fbf7-fwlqp" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.942314 containerd[1463]: 2025-08-13 07:15:21.898 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Namespace="calico-system" Pod="calico-kube-controllers-d84b9fbf7-fwlqp" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0", GenerateName:"calico-kube-controllers-d84b9fbf7-", Namespace:"calico-system", SelfLink:"", UID:"ec5447bc-9529-4230-8377-9fbb134088cb", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d84b9fbf7", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56", Pod:"calico-kube-controllers-d84b9fbf7-fwlqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidfd39197ea0", MAC:"9a:c2:a4:6a:5f:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:21.942314 containerd[1463]: 2025-08-13 07:15:21.933 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56" Namespace="calico-system" Pod="calico-kube-controllers-d84b9fbf7-fwlqp" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:21.950689 kernel: bpftool[4724]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:15:21.956580 containerd[1463]: time="2025-08-13T07:15:21.955396906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:21.956580 containerd[1463]: time="2025-08-13T07:15:21.956350616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:21.956580 containerd[1463]: time="2025-08-13T07:15:21.956391893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:21.958485 containerd[1463]: time="2025-08-13T07:15:21.958148457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:22.048724 systemd-networkd[1378]: calia0ce0480fd2: Link UP Aug 13 07:15:22.057712 systemd-networkd[1378]: calia0ce0480fd2: Gained carrier Aug 13 07:15:22.063683 systemd[1]: run-netns-cni\x2d0d25e296\x2d451d\x2d99fd\x2df2c7\x2d0b4cf8eef2bb.mount: Deactivated successfully. Aug 13 07:15:22.063829 systemd[1]: run-netns-cni\x2d923ba2e3\x2dc21f\x2d93f7\x2ddea2\x2d63f16d492057.mount: Deactivated successfully. Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.508 [INFO][4650] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.567 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0 calico-apiserver-694b6c886d- calico-apiserver d3025ee2-b9db-46e5-ad3b-6d08aebd9c73 985 0 2025-08-13 07:14:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:694b6c886d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal calico-apiserver-694b6c886d-h85bl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia0ce0480fd2 [] [] }} ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" 
Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-h85bl" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.567 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-h85bl" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.776 [INFO][4688] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" HandleID="k8s-pod-network.9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.777 [INFO][4688] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" HandleID="k8s-pod-network.9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030b060), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", "pod":"calico-apiserver-694b6c886d-h85bl", "timestamp":"2025-08-13 07:15:21.776621779 +0000 UTC"}, Hostname:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.777 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.868 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.869 [INFO][4688] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.914 [INFO][4688] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.938 [INFO][4688] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.956 [INFO][4688] ipam/ipam.go 511: Trying affinity for 192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.961 [INFO][4688] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.977 [INFO][4688] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.977 [INFO][4688] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.128/26 handle="k8s-pod-network.9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" 
host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.981 [INFO][4688] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:21.993 [INFO][4688] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.128/26 handle="k8s-pod-network.9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:22.012 [INFO][4688] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.135/26] block=192.168.94.128/26 handle="k8s-pod-network.9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:22.012 [INFO][4688] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.135/26] handle="k8s-pod-network.9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:22.012 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:22.121505 containerd[1463]: 2025-08-13 07:15:22.012 [INFO][4688] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.135/26] IPv6=[] ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" HandleID="k8s-pod-network.9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:22.122983 containerd[1463]: 2025-08-13 07:15:22.023 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-h85bl" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0", GenerateName:"calico-apiserver-694b6c886d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3025ee2-b9db-46e5-ad3b-6d08aebd9c73", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694b6c886d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-694b6c886d-h85bl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0ce0480fd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:22.122983 containerd[1463]: 2025-08-13 07:15:22.028 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.135/32] ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-h85bl" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:22.122983 containerd[1463]: 2025-08-13 07:15:22.031 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0ce0480fd2 ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-h85bl" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:22.122983 containerd[1463]: 2025-08-13 07:15:22.053 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-h85bl" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:22.122983 containerd[1463]: 2025-08-13 07:15:22.058 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-h85bl" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0", GenerateName:"calico-apiserver-694b6c886d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3025ee2-b9db-46e5-ad3b-6d08aebd9c73", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694b6c886d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac", Pod:"calico-apiserver-694b6c886d-h85bl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0ce0480fd2", MAC:"a6:01:8b:43:bc:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:22.122983 containerd[1463]: 
2025-08-13 07:15:22.105 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac" Namespace="calico-apiserver" Pod="calico-apiserver-694b6c886d-h85bl" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:22.145272 systemd[1]: Started cri-containerd-d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a.scope - libcontainer container d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a. Aug 13 07:15:22.205959 containerd[1463]: time="2025-08-13T07:15:22.196111370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:22.205959 containerd[1463]: time="2025-08-13T07:15:22.204635518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:22.205959 containerd[1463]: time="2025-08-13T07:15:22.204689807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:22.205959 containerd[1463]: time="2025-08-13T07:15:22.204835568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:22.282731 containerd[1463]: time="2025-08-13T07:15:22.280033937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:22.282731 containerd[1463]: time="2025-08-13T07:15:22.280252871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:22.282731 containerd[1463]: time="2025-08-13T07:15:22.280646823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:22.285681 systemd[1]: Started cri-containerd-2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56.scope - libcontainer container 2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56. Aug 13 07:15:22.288972 containerd[1463]: time="2025-08-13T07:15:22.283716978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:22.309503 systemd-networkd[1378]: cali1aaf8264355: Link UP Aug 13 07:15:22.312574 systemd-networkd[1378]: cali1aaf8264355: Gained carrier Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:21.449 [INFO][4613] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:21.509 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0 goldmane-58fd7646b9- calico-system 2016744c-fc88-45ab-9b7e-62d31ebc5f72 981 0 2025-08-13 07:14:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal goldmane-58fd7646b9-kc2rx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1aaf8264355 [] [] }} ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kc2rx" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:21.509 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kc2rx" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:21.797 [INFO][4682] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" HandleID="k8s-pod-network.73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:21.798 [INFO][4682] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" HandleID="k8s-pod-network.73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", "pod":"goldmane-58fd7646b9-kc2rx", "timestamp":"2025-08-13 07:15:21.79778774 +0000 UTC"}, Hostname:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:21.798 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.013 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.013 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal' Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.103 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.123 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.162 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.167 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.177 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.128/26 host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.177 [INFO][4682] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.128/26 handle="k8s-pod-network.73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.188 [INFO][4682] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.197 [INFO][4682] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.94.128/26 handle="k8s-pod-network.73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.241 [INFO][4682] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.136/26] block=192.168.94.128/26 handle="k8s-pod-network.73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.241 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.136/26] handle="k8s-pod-network.73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" host="ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal" Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.241 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:22.389790 containerd[1463]: 2025-08-13 07:15:22.241 [INFO][4682] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.136/26] IPv6=[] ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" HandleID="k8s-pod-network.73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:22.393583 containerd[1463]: 2025-08-13 07:15:22.271 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kc2rx" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2016744c-fc88-45ab-9b7e-62d31ebc5f72", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-58fd7646b9-kc2rx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1aaf8264355", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:22.393583 containerd[1463]: 2025-08-13 07:15:22.272 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.136/32] ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kc2rx" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:22.393583 containerd[1463]: 2025-08-13 07:15:22.272 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1aaf8264355 
ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kc2rx" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:22.393583 containerd[1463]: 2025-08-13 07:15:22.319 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kc2rx" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:22.393583 containerd[1463]: 2025-08-13 07:15:22.343 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kc2rx" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2016744c-fc88-45ab-9b7e-62d31ebc5f72", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd", Pod:"goldmane-58fd7646b9-kc2rx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1aaf8264355", MAC:"fa:3a:5f:ce:8f:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:22.393583 containerd[1463]: 2025-08-13 07:15:22.374 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kc2rx" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:22.421785 containerd[1463]: time="2025-08-13T07:15:22.421380169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbmzm,Uid:392a02f5-e055-4b6b-9098-a7e0976f9e04,Namespace:kube-system,Attempt:1,} returns sandbox id \"d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a\"" Aug 13 07:15:22.443258 containerd[1463]: time="2025-08-13T07:15:22.443173699Z" level=info msg="CreateContainer within sandbox \"d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:15:22.480738 systemd[1]: Started cri-containerd-9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac.scope - libcontainer container 9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac. 
Aug 13 07:15:22.523434 containerd[1463]: time="2025-08-13T07:15:22.523118562Z" level=info msg="CreateContainer within sandbox \"d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9faf3f2bec6fbadb2b5eccabc414136facd1a0e87471666d59fe59c6bd7f6fc2\"" Aug 13 07:15:22.527592 containerd[1463]: time="2025-08-13T07:15:22.527520345Z" level=info msg="StartContainer for \"9faf3f2bec6fbadb2b5eccabc414136facd1a0e87471666d59fe59c6bd7f6fc2\"" Aug 13 07:15:22.533489 containerd[1463]: time="2025-08-13T07:15:22.530923153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:22.533489 containerd[1463]: time="2025-08-13T07:15:22.531632574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:22.533489 containerd[1463]: time="2025-08-13T07:15:22.531812863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:22.533489 containerd[1463]: time="2025-08-13T07:15:22.532685274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:22.574734 systemd[1]: Started cri-containerd-73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd.scope - libcontainer container 73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd. Aug 13 07:15:22.684766 systemd[1]: Started cri-containerd-9faf3f2bec6fbadb2b5eccabc414136facd1a0e87471666d59fe59c6bd7f6fc2.scope - libcontainer container 9faf3f2bec6fbadb2b5eccabc414136facd1a0e87471666d59fe59c6bd7f6fc2. 
Aug 13 07:15:22.759501 containerd[1463]: time="2025-08-13T07:15:22.759353125Z" level=info msg="StartContainer for \"9faf3f2bec6fbadb2b5eccabc414136facd1a0e87471666d59fe59c6bd7f6fc2\" returns successfully" Aug 13 07:15:22.811499 kubelet[2544]: I0813 07:15:22.810693 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sbmzm" podStartSLOduration=44.810665327 podStartE2EDuration="44.810665327s" podCreationTimestamp="2025-08-13 07:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:15:22.809136053 +0000 UTC m=+50.499757283" watchObservedRunningTime="2025-08-13 07:15:22.810665327 +0000 UTC m=+50.501287074" Aug 13 07:15:22.908971 containerd[1463]: time="2025-08-13T07:15:22.907732231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kc2rx,Uid:2016744c-fc88-45ab-9b7e-62d31ebc5f72,Namespace:calico-system,Attempt:1,} returns sandbox id \"73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd\"" Aug 13 07:15:22.978149 containerd[1463]: time="2025-08-13T07:15:22.978004568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d84b9fbf7-fwlqp,Uid:ec5447bc-9529-4230-8377-9fbb134088cb,Namespace:calico-system,Attempt:1,} returns sandbox id \"2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56\"" Aug 13 07:15:23.152301 containerd[1463]: time="2025-08-13T07:15:23.152080389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694b6c886d-h85bl,Uid:d3025ee2-b9db-46e5-ad3b-6d08aebd9c73,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac\"" Aug 13 07:15:23.198624 systemd-networkd[1378]: calidfd39197ea0: Gained IPv6LL Aug 13 07:15:23.644852 systemd-networkd[1378]: calia0ce0480fd2: Gained IPv6LL Aug 13 07:15:23.645357 systemd-networkd[1378]: 
cali1aaf8264355: Gained IPv6LL Aug 13 07:15:23.709904 systemd-networkd[1378]: cali404224b6227: Gained IPv6LL Aug 13 07:15:23.840754 systemd-networkd[1378]: vxlan.calico: Link UP Aug 13 07:15:23.840769 systemd-networkd[1378]: vxlan.calico: Gained carrier Aug 13 07:15:24.508375 containerd[1463]: time="2025-08-13T07:15:24.508312464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:24.511815 containerd[1463]: time="2025-08-13T07:15:24.511750437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:15:24.513196 containerd[1463]: time="2025-08-13T07:15:24.513152125Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:24.524527 containerd[1463]: time="2025-08-13T07:15:24.524435793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.67399317s" Aug 13 07:15:24.524527 containerd[1463]: time="2025-08-13T07:15:24.524531015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:15:24.525497 containerd[1463]: time="2025-08-13T07:15:24.524807599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:24.529768 containerd[1463]: time="2025-08-13T07:15:24.529721021Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:15:24.532209 containerd[1463]: time="2025-08-13T07:15:24.532165364Z" level=info msg="CreateContainer within sandbox \"c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:15:24.558005 containerd[1463]: time="2025-08-13T07:15:24.557851797Z" level=info msg="CreateContainer within sandbox \"c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"72e25ce2a7a36059caff89fd89e752dd2b9fda11892273c8e537b31db291bf56\"" Aug 13 07:15:24.559609 containerd[1463]: time="2025-08-13T07:15:24.559317719Z" level=info msg="StartContainer for \"72e25ce2a7a36059caff89fd89e752dd2b9fda11892273c8e537b31db291bf56\"" Aug 13 07:15:24.629691 systemd[1]: Started cri-containerd-72e25ce2a7a36059caff89fd89e752dd2b9fda11892273c8e537b31db291bf56.scope - libcontainer container 72e25ce2a7a36059caff89fd89e752dd2b9fda11892273c8e537b31db291bf56. 
Aug 13 07:15:24.722615 containerd[1463]: time="2025-08-13T07:15:24.722560428Z" level=info msg="StartContainer for \"72e25ce2a7a36059caff89fd89e752dd2b9fda11892273c8e537b31db291bf56\" returns successfully" Aug 13 07:15:25.629969 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Aug 13 07:15:25.825891 kubelet[2544]: I0813 07:15:25.825846 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:15:25.827505 containerd[1463]: time="2025-08-13T07:15:25.826591081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:25.828686 containerd[1463]: time="2025-08-13T07:15:25.828635839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:15:25.830731 containerd[1463]: time="2025-08-13T07:15:25.830689163Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:25.835491 containerd[1463]: time="2025-08-13T07:15:25.834539748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:25.836741 containerd[1463]: time="2025-08-13T07:15:25.836699538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.306932053s" Aug 13 07:15:25.836854 containerd[1463]: time="2025-08-13T07:15:25.836745576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:15:25.838986 containerd[1463]: time="2025-08-13T07:15:25.838728437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:15:25.841953 containerd[1463]: time="2025-08-13T07:15:25.841181349Z" level=info msg="CreateContainer within sandbox \"98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:15:25.870503 containerd[1463]: time="2025-08-13T07:15:25.868376906Z" level=info msg="CreateContainer within sandbox \"98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"74c39f43fbed483df16e937e511ba03b33fb99121de5b3cedb671d3f5f8b243a\"" Aug 13 07:15:25.883205 containerd[1463]: time="2025-08-13T07:15:25.883055372Z" level=info msg="StartContainer for \"74c39f43fbed483df16e937e511ba03b33fb99121de5b3cedb671d3f5f8b243a\"" Aug 13 07:15:25.888849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469113494.mount: Deactivated successfully. Aug 13 07:15:25.956718 systemd[1]: Started cri-containerd-74c39f43fbed483df16e937e511ba03b33fb99121de5b3cedb671d3f5f8b243a.scope - libcontainer container 74c39f43fbed483df16e937e511ba03b33fb99121de5b3cedb671d3f5f8b243a. 
Aug 13 07:15:26.019710 containerd[1463]: time="2025-08-13T07:15:26.019629724Z" level=info msg="StartContainer for \"74c39f43fbed483df16e937e511ba03b33fb99121de5b3cedb671d3f5f8b243a\" returns successfully" Aug 13 07:15:28.488769 ntpd[1432]: Listen normally on 7 vxlan.calico 192.168.94.128:123 Aug 13 07:15:28.488900 ntpd[1432]: Listen normally on 8 cali427eb7a702d [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 07:15:28.489004 ntpd[1432]: Listen normally on 9 cali30a2d438b07 [fe80::ecee:eeff:feee:eeee%5]:123 Aug 13 07:15:28.489066 ntpd[1432]: Listen normally on 10 calif803dbe35f4 [fe80::ecee:eeff:feee:eeee%6]:123 Aug 13 07:15:28.489128 ntpd[1432]: Listen normally on 11 calie446ef15128 [fe80::ecee:eeff:feee:eeee%7]:123 Aug 13 07:15:28.489187 ntpd[1432]: Listen normally on 12 cali404224b6227 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 07:15:28.489245 ntpd[1432]: Listen normally on 13 calidfd39197ea0 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 07:15:28.490330 ntpd[1432]: Listen normally on 14 calia0ce0480fd2 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 07:15:28.490405 ntpd[1432]: Listen normally on 15 cali1aaf8264355 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 07:15:28.490484 ntpd[1432]: Listen normally on 16 vxlan.calico [fe80::64ec:3ff:fe38:fd17%12]:123 Aug 13 07:15:28.815901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278731035.mount: Deactivated successfully. Aug 13 07:15:30.224114 containerd[1463]: time="2025-08-13T07:15:30.224038463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:30.225767 containerd[1463]: time="2025-08-13T07:15:30.225698080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:15:30.227040 containerd[1463]: time="2025-08-13T07:15:30.226968597Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:30.230421 containerd[1463]: time="2025-08-13T07:15:30.230372611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:30.232170 containerd[1463]: time="2025-08-13T07:15:30.231988504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest
\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.393219529s" Aug 13 07:15:30.232170 containerd[1463]: time="2025-08-13T07:15:30.232039487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:15:30.234264 containerd[1463]: time="2025-08-13T07:15:30.233613499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:15:30.235515 containerd[1463]: time="2025-08-13T07:15:30.235445172Z" level=info msg="CreateContainer within sandbox \"73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:15:30.256514 containerd[1463]: time="2025-08-13T07:15:30.256317848Z" level=info msg="CreateContainer within sandbox \"73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ad34e6e6597096af05276f298949ab44d3d1a1adabc5dc674d01b4ef226498ce\"" Aug 13 07:15:30.259163 containerd[1463]: time="2025-08-13T07:15:30.258739045Z" level=info msg="StartContainer for \"ad34e6e6597096af05276f298949ab44d3d1a1adabc5dc674d01b4ef226498ce\"" Aug 13 07:15:30.314748 systemd[1]: Started cri-containerd-ad34e6e6597096af05276f298949ab44d3d1a1adabc5dc674d01b4ef226498ce.scope - libcontainer container ad34e6e6597096af05276f298949ab44d3d1a1adabc5dc674d01b4ef226498ce. 
Aug 13 07:15:30.376555 containerd[1463]: time="2025-08-13T07:15:30.376393331Z" level=info msg="StartContainer for \"ad34e6e6597096af05276f298949ab44d3d1a1adabc5dc674d01b4ef226498ce\" returns successfully" Aug 13 07:15:30.885157 kubelet[2544]: I0813 07:15:30.884976 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-694b6c886d-bxdxg" podStartSLOduration=35.5822317 podStartE2EDuration="41.884949071s" podCreationTimestamp="2025-08-13 07:14:49 +0000 UTC" firstStartedPulling="2025-08-13 07:15:18.225528969 +0000 UTC m=+45.916150181" lastFinishedPulling="2025-08-13 07:15:24.528246333 +0000 UTC m=+52.218867552" observedRunningTime="2025-08-13 07:15:24.848771786 +0000 UTC m=+52.539393014" watchObservedRunningTime="2025-08-13 07:15:30.884949071 +0000 UTC m=+58.575570328" Aug 13 07:15:30.913866 systemd[1]: run-containerd-runc-k8s.io-ad34e6e6597096af05276f298949ab44d3d1a1adabc5dc674d01b4ef226498ce-runc.7hkHRY.mount: Deactivated successfully. Aug 13 07:15:31.027591 kubelet[2544]: I0813 07:15:31.027511 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-kc2rx" podStartSLOduration=30.707569647 podStartE2EDuration="38.026441133s" podCreationTimestamp="2025-08-13 07:14:53 +0000 UTC" firstStartedPulling="2025-08-13 07:15:22.914458274 +0000 UTC m=+50.605079479" lastFinishedPulling="2025-08-13 07:15:30.233329743 +0000 UTC m=+57.923950965" observedRunningTime="2025-08-13 07:15:30.887118233 +0000 UTC m=+58.577739462" watchObservedRunningTime="2025-08-13 07:15:31.026441133 +0000 UTC m=+58.717062365" Aug 13 07:15:32.492494 containerd[1463]: time="2025-08-13T07:15:32.492428939Z" level=info msg="StopPodSandbox for \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\"" Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.604 [WARNING][5199] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0", GenerateName:"calico-apiserver-694b6c886d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9138eb6-4b16-4ca5-add5-a7780c9f57fc", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694b6c886d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4", Pod:"calico-apiserver-694b6c886d-bxdxg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif803dbe35f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.606 [INFO][5199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 
07:15:32.606 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" iface="eth0" netns="" Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.606 [INFO][5199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.606 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.659 [INFO][5206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" HandleID="k8s-pod-network.d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.659 [INFO][5206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.659 [INFO][5206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.672 [WARNING][5206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" HandleID="k8s-pod-network.d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.672 [INFO][5206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" HandleID="k8s-pod-network.d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.675 [INFO][5206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:32.681718 containerd[1463]: 2025-08-13 07:15:32.678 [INFO][5199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:32.681718 containerd[1463]: time="2025-08-13T07:15:32.681567817Z" level=info msg="TearDown network for sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\" successfully" Aug 13 07:15:32.681718 containerd[1463]: time="2025-08-13T07:15:32.681605948Z" level=info msg="StopPodSandbox for \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\" returns successfully" Aug 13 07:15:32.684080 containerd[1463]: time="2025-08-13T07:15:32.684041258Z" level=info msg="RemovePodSandbox for \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\"" Aug 13 07:15:32.684205 containerd[1463]: time="2025-08-13T07:15:32.684092278Z" level=info msg="Forcibly stopping sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\"" Aug 13 07:15:32.823112 containerd[1463]: time="2025-08-13T07:15:32.822011748Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:32.824360 containerd[1463]: time="2025-08-13T07:15:32.824074377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:15:32.825207 containerd[1463]: time="2025-08-13T07:15:32.825165494Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.759 [WARNING][5221] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0", GenerateName:"calico-apiserver-694b6c886d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9138eb6-4b16-4ca5-add5-a7780c9f57fc", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694b6c886d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", 
ContainerID:"c8a512e72a6b9403c609c28c091189b4882411a7c875cbf42a1573866e937cd4", Pod:"calico-apiserver-694b6c886d-bxdxg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif803dbe35f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.760 [INFO][5221] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.760 [INFO][5221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" iface="eth0" netns="" Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.760 [INFO][5221] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.760 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.812 [INFO][5229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" HandleID="k8s-pod-network.d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.812 [INFO][5229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.812 [INFO][5229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.821 [WARNING][5229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" HandleID="k8s-pod-network.d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.822 [INFO][5229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" HandleID="k8s-pod-network.d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--bxdxg-eth0" Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.825 [INFO][5229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:32.829748 containerd[1463]: 2025-08-13 07:15:32.827 [INFO][5221] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d" Aug 13 07:15:32.830854 containerd[1463]: time="2025-08-13T07:15:32.829772813Z" level=info msg="TearDown network for sandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\" successfully" Aug 13 07:15:32.833076 containerd[1463]: time="2025-08-13T07:15:32.831285716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:32.833076 containerd[1463]: time="2025-08-13T07:15:32.832578894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.598923236s" Aug 13 07:15:32.833076 containerd[1463]: time="2025-08-13T07:15:32.832622800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:15:32.834215 containerd[1463]: time="2025-08-13T07:15:32.834180431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:15:32.842600 containerd[1463]: time="2025-08-13T07:15:32.842441127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:15:32.842944 containerd[1463]: time="2025-08-13T07:15:32.842874336Z" level=info msg="RemovePodSandbox \"d521b71ab62889c74829dda354c71db34007149de97789d319fa744b78d10e8d\" returns successfully" Aug 13 07:15:32.850148 containerd[1463]: time="2025-08-13T07:15:32.849960467Z" level=info msg="StopPodSandbox for \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\"" Aug 13 07:15:32.861516 containerd[1463]: time="2025-08-13T07:15:32.860889808Z" level=info msg="CreateContainer within sandbox \"2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:15:32.890259 containerd[1463]: time="2025-08-13T07:15:32.890210824Z" level=info msg="CreateContainer within sandbox \"2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"09b42d7b94417075c9b30293dc6bfba1d810c57c6b2e63aa8072a46eec3f2f16\"" Aug 13 07:15:32.893980 containerd[1463]: time="2025-08-13T07:15:32.892766096Z" level=info msg="StartContainer for \"09b42d7b94417075c9b30293dc6bfba1d810c57c6b2e63aa8072a46eec3f2f16\"" Aug 13 07:15:32.965880 systemd[1]: Started cri-containerd-09b42d7b94417075c9b30293dc6bfba1d810c57c6b2e63aa8072a46eec3f2f16.scope - libcontainer container 09b42d7b94417075c9b30293dc6bfba1d810c57c6b2e63aa8072a46eec3f2f16. Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:32.976 [WARNING][5245] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0", GenerateName:"calico-apiserver-694b6c886d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3025ee2-b9db-46e5-ad3b-6d08aebd9c73", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694b6c886d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac", Pod:"calico-apiserver-694b6c886d-h85bl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0ce0480fd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:32.979 [INFO][5245] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 
07:15:32.979 [INFO][5245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" iface="eth0" netns="" Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:32.979 [INFO][5245] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:32.979 [INFO][5245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:33.025 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" HandleID="k8s-pod-network.320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:33.025 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:33.025 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:33.037 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" HandleID="k8s-pod-network.320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:33.038 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" HandleID="k8s-pod-network.320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:33.040 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:33.045612 containerd[1463]: 2025-08-13 07:15:33.043 [INFO][5245] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:33.047319 containerd[1463]: time="2025-08-13T07:15:33.045715797Z" level=info msg="TearDown network for sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\" successfully" Aug 13 07:15:33.047319 containerd[1463]: time="2025-08-13T07:15:33.045753324Z" level=info msg="StopPodSandbox for \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\" returns successfully" Aug 13 07:15:33.047735 containerd[1463]: time="2025-08-13T07:15:33.047429708Z" level=info msg="RemovePodSandbox for \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\"" Aug 13 07:15:33.047735 containerd[1463]: time="2025-08-13T07:15:33.047523146Z" level=info msg="Forcibly stopping sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\"" Aug 13 07:15:33.058355 containerd[1463]: time="2025-08-13T07:15:33.058154970Z" level=info msg="StartContainer for 
\"09b42d7b94417075c9b30293dc6bfba1d810c57c6b2e63aa8072a46eec3f2f16\" returns successfully" Aug 13 07:15:33.138369 containerd[1463]: time="2025-08-13T07:15:33.134665092Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:33.138369 containerd[1463]: time="2025-08-13T07:15:33.136512250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:15:33.147769 containerd[1463]: time="2025-08-13T07:15:33.145738376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 311.510881ms" Aug 13 07:15:33.147769 containerd[1463]: time="2025-08-13T07:15:33.145793877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:15:33.148002 containerd[1463]: time="2025-08-13T07:15:33.147851428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:15:33.151837 containerd[1463]: time="2025-08-13T07:15:33.151786835Z" level=info msg="CreateContainer within sandbox \"9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:15:33.182255 containerd[1463]: time="2025-08-13T07:15:33.181451867Z" level=info msg="CreateContainer within sandbox \"9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b2640d1617ceff0a66f801ccd00fdfb3883fd5b5b308f6925c1b6b0b7dfdb0c\"" Aug 13 
07:15:33.188618 containerd[1463]: time="2025-08-13T07:15:33.186821127Z" level=info msg="StartContainer for \"7b2640d1617ceff0a66f801ccd00fdfb3883fd5b5b308f6925c1b6b0b7dfdb0c\"" Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.123 [WARNING][5301] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0", GenerateName:"calico-apiserver-694b6c886d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3025ee2-b9db-46e5-ad3b-6d08aebd9c73", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694b6c886d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"9a7b1b5c42701d8c4d0be19111ae8e664df9e402709425b5cc4a1f510d2a12ac", Pod:"calico-apiserver-694b6c886d-h85bl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0ce0480fd2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.124 [INFO][5301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.124 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" iface="eth0" netns="" Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.124 [INFO][5301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.124 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.189 [INFO][5317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" HandleID="k8s-pod-network.320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.190 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.190 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.204 [WARNING][5317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" HandleID="k8s-pod-network.320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.204 [INFO][5317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" HandleID="k8s-pod-network.320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--apiserver--694b6c886d--h85bl-eth0" Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.207 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:33.215974 containerd[1463]: 2025-08-13 07:15:33.210 [INFO][5301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d" Aug 13 07:15:33.217989 containerd[1463]: time="2025-08-13T07:15:33.216074249Z" level=info msg="TearDown network for sandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\" successfully" Aug 13 07:15:33.274399 systemd[1]: Started cri-containerd-7b2640d1617ceff0a66f801ccd00fdfb3883fd5b5b308f6925c1b6b0b7dfdb0c.scope - libcontainer container 7b2640d1617ceff0a66f801ccd00fdfb3883fd5b5b308f6925c1b6b0b7dfdb0c. Aug 13 07:15:33.294261 containerd[1463]: time="2025-08-13T07:15:33.292779721Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:15:33.294261 containerd[1463]: time="2025-08-13T07:15:33.292887657Z" level=info msg="RemovePodSandbox \"320c05c8b10ade0053a5de8d132703dee7be120a514c7a28716c30136d32831d\" returns successfully" Aug 13 07:15:33.294261 containerd[1463]: time="2025-08-13T07:15:33.293738655Z" level=info msg="StopPodSandbox for \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\"" Aug 13 07:15:33.407607 containerd[1463]: time="2025-08-13T07:15:33.406730286Z" level=info msg="StartContainer for \"7b2640d1617ceff0a66f801ccd00fdfb3883fd5b5b308f6925c1b6b0b7dfdb0c\" returns successfully" Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.386 [WARNING][5361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"227e33f7-b859-4c0c-b764-8eade43b88be", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58", 
Pod:"coredns-7c65d6cfc9-qt8wq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30a2d438b07", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.390 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.390 [INFO][5361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" iface="eth0" netns="" Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.391 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.391 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.457 [INFO][5376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" HandleID="k8s-pod-network.d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.458 [INFO][5376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.458 [INFO][5376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.475 [WARNING][5376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" HandleID="k8s-pod-network.d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.475 [INFO][5376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" HandleID="k8s-pod-network.d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.477 [INFO][5376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:33.482318 containerd[1463]: 2025-08-13 07:15:33.480 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:33.482318 containerd[1463]: time="2025-08-13T07:15:33.482079235Z" level=info msg="TearDown network for sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\" successfully" Aug 13 07:15:33.482318 containerd[1463]: time="2025-08-13T07:15:33.482133049Z" level=info msg="StopPodSandbox for \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\" returns successfully" Aug 13 07:15:33.484411 containerd[1463]: time="2025-08-13T07:15:33.483280851Z" level=info msg="RemovePodSandbox for \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\"" Aug 13 07:15:33.484411 containerd[1463]: time="2025-08-13T07:15:33.483325833Z" level=info msg="Forcibly stopping sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\"" Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.564 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"227e33f7-b859-4c0c-b764-8eade43b88be", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"0effa2253ee3bd6c7ba96a159391e7db56612ab536b5a5f1bbe5868838483a58", Pod:"coredns-7c65d6cfc9-qt8wq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30a2d438b07", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.564 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.564 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" iface="eth0" netns="" Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.564 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.564 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.604 [INFO][5410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" HandleID="k8s-pod-network.d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.604 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.604 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.615 [WARNING][5410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" HandleID="k8s-pod-network.d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.616 [INFO][5410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" HandleID="k8s-pod-network.d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--qt8wq-eth0" Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.618 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:33.622399 containerd[1463]: 2025-08-13 07:15:33.620 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4" Aug 13 07:15:33.624422 containerd[1463]: time="2025-08-13T07:15:33.623644675Z" level=info msg="TearDown network for sandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\" successfully" Aug 13 07:15:33.629940 containerd[1463]: time="2025-08-13T07:15:33.629878077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:15:33.630266 containerd[1463]: time="2025-08-13T07:15:33.630149518Z" level=info msg="RemovePodSandbox \"d5ad6840053bf770cd53eb9d1eb7041ef694749791482003b0b577ac2b9a78d4\" returns successfully" Aug 13 07:15:33.631340 containerd[1463]: time="2025-08-13T07:15:33.630919828Z" level=info msg="StopPodSandbox for \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\"" Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.712 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"392a02f5-e055-4b6b-9098-a7e0976f9e04", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a", Pod:"coredns-7c65d6cfc9-sbmzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali404224b6227", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.712 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.712 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" iface="eth0" netns="" Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.712 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.712 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.756 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" HandleID="k8s-pod-network.f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.757 [INFO][5431] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.757 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.769 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" HandleID="k8s-pod-network.f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.770 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" HandleID="k8s-pod-network.f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.772 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:33.777090 containerd[1463]: 2025-08-13 07:15:33.774 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:33.779637 containerd[1463]: time="2025-08-13T07:15:33.777612853Z" level=info msg="TearDown network for sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\" successfully" Aug 13 07:15:33.779637 containerd[1463]: time="2025-08-13T07:15:33.777767011Z" level=info msg="StopPodSandbox for \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\" returns successfully" Aug 13 07:15:33.779637 containerd[1463]: time="2025-08-13T07:15:33.779282062Z" level=info msg="RemovePodSandbox for \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\"" Aug 13 07:15:33.779637 containerd[1463]: time="2025-08-13T07:15:33.779318232Z" level=info msg="Forcibly stopping sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\"" Aug 13 07:15:33.850851 systemd[1]: run-containerd-runc-k8s.io-09b42d7b94417075c9b30293dc6bfba1d810c57c6b2e63aa8072a46eec3f2f16-runc.0yilEc.mount: Deactivated successfully. 
Aug 13 07:15:33.940819 kubelet[2544]: I0813 07:15:33.940664 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d84b9fbf7-fwlqp" podStartSLOduration=30.088270448 podStartE2EDuration="39.940636213s" podCreationTimestamp="2025-08-13 07:14:54 +0000 UTC" firstStartedPulling="2025-08-13 07:15:22.98166613 +0000 UTC m=+50.672287348" lastFinishedPulling="2025-08-13 07:15:32.834031888 +0000 UTC m=+60.524653113" observedRunningTime="2025-08-13 07:15:33.934857465 +0000 UTC m=+61.625478705" watchObservedRunningTime="2025-08-13 07:15:33.940636213 +0000 UTC m=+61.631257441" Aug 13 07:15:33.986321 kubelet[2544]: I0813 07:15:33.986226 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-694b6c886d-h85bl" podStartSLOduration=34.996407743 podStartE2EDuration="44.986193392s" podCreationTimestamp="2025-08-13 07:14:49 +0000 UTC" firstStartedPulling="2025-08-13 07:15:23.157264104 +0000 UTC m=+50.847885317" lastFinishedPulling="2025-08-13 07:15:33.147049754 +0000 UTC m=+60.837670966" observedRunningTime="2025-08-13 07:15:33.983374018 +0000 UTC m=+61.673995247" watchObservedRunningTime="2025-08-13 07:15:33.986193392 +0000 UTC m=+61.676814622" Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.857 [WARNING][5446] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"392a02f5-e055-4b6b-9098-a7e0976f9e04", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"d2995a16c1c393ba5bb22326207c80baaa568288fcf8d6cba1597f1f50459a1a", Pod:"coredns-7c65d6cfc9-sbmzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali404224b6227", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.858 [INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.858 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" iface="eth0" netns="" Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.858 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.859 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.926 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" HandleID="k8s-pod-network.f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.930 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.930 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.957 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" HandleID="k8s-pod-network.f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.957 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" HandleID="k8s-pod-network.f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--sbmzm-eth0" Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.970 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:33.987608 containerd[1463]: 2025-08-13 07:15:33.974 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100" Aug 13 07:15:33.988454 containerd[1463]: time="2025-08-13T07:15:33.987670005Z" level=info msg="TearDown network for sandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\" successfully" Aug 13 07:15:34.000276 containerd[1463]: time="2025-08-13T07:15:34.000216529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:15:34.000449 containerd[1463]: time="2025-08-13T07:15:34.000317430Z" level=info msg="RemovePodSandbox \"f783343c292b66b068da039cc708759927677649aa3c17956b969f5fe4dd8100\" returns successfully" Aug 13 07:15:34.002276 containerd[1463]: time="2025-08-13T07:15:34.001959866Z" level=info msg="StopPodSandbox for \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\"" Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.100 [WARNING][5484] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.101 [INFO][5484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.101 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" iface="eth0" netns="" Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.101 [INFO][5484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.101 [INFO][5484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.169 [INFO][5493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" HandleID="k8s-pod-network.9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.170 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.170 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.202 [WARNING][5493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" HandleID="k8s-pod-network.9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.202 [INFO][5493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" HandleID="k8s-pod-network.9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.208 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:34.216230 containerd[1463]: 2025-08-13 07:15:34.213 [INFO][5484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:34.216230 containerd[1463]: time="2025-08-13T07:15:34.216167535Z" level=info msg="TearDown network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\" successfully" Aug 13 07:15:34.216230 containerd[1463]: time="2025-08-13T07:15:34.216201941Z" level=info msg="StopPodSandbox for \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\" returns successfully" Aug 13 07:15:34.219771 containerd[1463]: time="2025-08-13T07:15:34.219723773Z" level=info msg="RemovePodSandbox for \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\"" Aug 13 07:15:34.219908 containerd[1463]: time="2025-08-13T07:15:34.219781198Z" level=info msg="Forcibly stopping sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\"" Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.368 [WARNING][5510] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, 
moving forward with the clean up ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" WorkloadEndpoint="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.368 [INFO][5510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.368 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" iface="eth0" netns="" Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.368 [INFO][5510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.368 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.469 [INFO][5522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" HandleID="k8s-pod-network.9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.470 [INFO][5522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.472 [INFO][5522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.492 [WARNING][5522] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" HandleID="k8s-pod-network.9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.493 [INFO][5522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" HandleID="k8s-pod-network.9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-whisker--77d755b4b8--pfvbp-eth0" Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.496 [INFO][5522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:34.510939 containerd[1463]: 2025-08-13 07:15:34.504 [INFO][5510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa" Aug 13 07:15:34.510939 containerd[1463]: time="2025-08-13T07:15:34.509732107Z" level=info msg="TearDown network for sandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\" successfully" Aug 13 07:15:34.526779 containerd[1463]: time="2025-08-13T07:15:34.524425323Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:15:34.526779 containerd[1463]: time="2025-08-13T07:15:34.524694815Z" level=info msg="RemovePodSandbox \"9a6335b7d99b348ea1e8aabce1785fad70df5a2831d95b5032965639ccdbb9aa\" returns successfully" Aug 13 07:15:34.527504 containerd[1463]: time="2025-08-13T07:15:34.527449497Z" level=info msg="StopPodSandbox for \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\"" Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.683 [WARNING][5536] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"707e0f61-36a7-4b17-96a3-b98a0613f9bd", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170", Pod:"csi-node-driver-9swvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.94.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie446ef15128", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.686 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.687 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" iface="eth0" netns="" Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.687 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.688 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.767 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" HandleID="k8s-pod-network.a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.767 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.767 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.779 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" HandleID="k8s-pod-network.a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.779 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" HandleID="k8s-pod-network.a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.782 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:34.787091 containerd[1463]: 2025-08-13 07:15:34.784 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:34.791906 containerd[1463]: time="2025-08-13T07:15:34.787226488Z" level=info msg="TearDown network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\" successfully" Aug 13 07:15:34.791906 containerd[1463]: time="2025-08-13T07:15:34.787581994Z" level=info msg="StopPodSandbox for \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\" returns successfully" Aug 13 07:15:34.791906 containerd[1463]: time="2025-08-13T07:15:34.789726144Z" level=info msg="RemovePodSandbox for \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\"" Aug 13 07:15:34.791906 containerd[1463]: time="2025-08-13T07:15:34.789962604Z" level=info msg="Forcibly stopping sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\"" Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:34.963 [WARNING][5566] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"707e0f61-36a7-4b17-96a3-b98a0613f9bd", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170", Pod:"csi-node-driver-9swvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie446ef15128", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:34.963 [INFO][5566] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:34.963 
[INFO][5566] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" iface="eth0" netns="" Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:34.964 [INFO][5566] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:34.964 [INFO][5566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:35.033 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" HandleID="k8s-pod-network.a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:35.034 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:35.034 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:35.051 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" HandleID="k8s-pod-network.a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:35.052 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" HandleID="k8s-pod-network.a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-csi--node--driver--9swvw-eth0" Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:35.055 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:35.065687 containerd[1463]: 2025-08-13 07:15:35.061 [INFO][5566] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb" Aug 13 07:15:35.066552 containerd[1463]: time="2025-08-13T07:15:35.066115494Z" level=info msg="TearDown network for sandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\" successfully" Aug 13 07:15:35.075548 containerd[1463]: time="2025-08-13T07:15:35.074249909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:15:35.075548 containerd[1463]: time="2025-08-13T07:15:35.074355145Z" level=info msg="RemovePodSandbox \"a8ec2cedde611f46e1a06b9fd6fdfe3acd63f0b2aa98f45b59d5a86bbbdb2ddb\" returns successfully" Aug 13 07:15:35.076869 containerd[1463]: time="2025-08-13T07:15:35.076814543Z" level=info msg="StopPodSandbox for \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\"" Aug 13 07:15:35.207299 containerd[1463]: time="2025-08-13T07:15:35.207235804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:35.210417 containerd[1463]: time="2025-08-13T07:15:35.210337631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:15:35.213192 containerd[1463]: time="2025-08-13T07:15:35.212298255Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:35.224895 containerd[1463]: time="2025-08-13T07:15:35.224643478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:35.226208 containerd[1463]: time="2025-08-13T07:15:35.226132533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.07823469s" Aug 13 07:15:35.226208 containerd[1463]: time="2025-08-13T07:15:35.226189205Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:15:35.233492 containerd[1463]: time="2025-08-13T07:15:35.233135959Z" level=info msg="CreateContainer within sandbox \"98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:15:35.271713 containerd[1463]: time="2025-08-13T07:15:35.271329405Z" level=info msg="CreateContainer within sandbox \"98553ccf1fa4d4fafa3bc585082c6fb29b15114df41f6251cf34a37cfdc0b170\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2cf315875ea3b4c0c11fb5451dd4f251cb33faf33c4401a42afcc0e5ff9c3d1d\"" Aug 13 07:15:35.274055 containerd[1463]: time="2025-08-13T07:15:35.273997503Z" level=info msg="StartContainer for \"2cf315875ea3b4c0c11fb5451dd4f251cb33faf33c4401a42afcc0e5ff9c3d1d\"" Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.209 [WARNING][5607] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0", GenerateName:"calico-kube-controllers-d84b9fbf7-", Namespace:"calico-system", SelfLink:"", UID:"ec5447bc-9529-4230-8377-9fbb134088cb", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d84b9fbf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56", Pod:"calico-kube-controllers-d84b9fbf7-fwlqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidfd39197ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.210 [INFO][5607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:35.332095 
containerd[1463]: 2025-08-13 07:15:35.211 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" iface="eth0" netns="" Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.211 [INFO][5607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.211 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.295 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" HandleID="k8s-pod-network.e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.296 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.296 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.314 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" HandleID="k8s-pod-network.e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.314 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" HandleID="k8s-pod-network.e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.320 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:35.332095 containerd[1463]: 2025-08-13 07:15:35.329 [INFO][5607] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:35.332095 containerd[1463]: time="2025-08-13T07:15:35.332051248Z" level=info msg="TearDown network for sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\" successfully" Aug 13 07:15:35.334629 containerd[1463]: time="2025-08-13T07:15:35.332090552Z" level=info msg="StopPodSandbox for \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\" returns successfully" Aug 13 07:15:35.334629 containerd[1463]: time="2025-08-13T07:15:35.333431333Z" level=info msg="RemovePodSandbox for \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\"" Aug 13 07:15:35.334629 containerd[1463]: time="2025-08-13T07:15:35.333645893Z" level=info msg="Forcibly stopping sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\"" Aug 13 07:15:35.381676 systemd[1]: run-containerd-runc-k8s.io-2cf315875ea3b4c0c11fb5451dd4f251cb33faf33c4401a42afcc0e5ff9c3d1d-runc.fObBXI.mount: Deactivated successfully. Aug 13 07:15:35.397150 systemd[1]: Started cri-containerd-2cf315875ea3b4c0c11fb5451dd4f251cb33faf33c4401a42afcc0e5ff9c3d1d.scope - libcontainer container 2cf315875ea3b4c0c11fb5451dd4f251cb33faf33c4401a42afcc0e5ff9c3d1d. Aug 13 07:15:35.515335 containerd[1463]: time="2025-08-13T07:15:35.515254884Z" level=info msg="StartContainer for \"2cf315875ea3b4c0c11fb5451dd4f251cb33faf33c4401a42afcc0e5ff9c3d1d\" returns successfully" Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.476 [WARNING][5645] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0", GenerateName:"calico-kube-controllers-d84b9fbf7-", Namespace:"calico-system", SelfLink:"", UID:"ec5447bc-9529-4230-8377-9fbb134088cb", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d84b9fbf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"2bb3f8a70a43f59728b8943cd2bee22244c61785a60a04d6baec81c0a4ed9e56", Pod:"calico-kube-controllers-d84b9fbf7-fwlqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidfd39197ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.477 [INFO][5645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:35.554317 
containerd[1463]: 2025-08-13 07:15:35.478 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" iface="eth0" netns="" Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.478 [INFO][5645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.478 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.531 [INFO][5663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" HandleID="k8s-pod-network.e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.531 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.531 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.542 [WARNING][5663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" HandleID="k8s-pod-network.e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.542 [INFO][5663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" HandleID="k8s-pod-network.e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-calico--kube--controllers--d84b9fbf7--fwlqp-eth0" Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.545 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:35.554317 containerd[1463]: 2025-08-13 07:15:35.548 [INFO][5645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a" Aug 13 07:15:35.554317 containerd[1463]: time="2025-08-13T07:15:35.551982016Z" level=info msg="TearDown network for sandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\" successfully" Aug 13 07:15:35.561195 containerd[1463]: time="2025-08-13T07:15:35.561051673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:15:35.561658 containerd[1463]: time="2025-08-13T07:15:35.561557570Z" level=info msg="RemovePodSandbox \"e2a495ef47d750639fcc67070056d5b0ccf6394f5a1921e2565e4886a3bd444a\" returns successfully" Aug 13 07:15:35.562895 containerd[1463]: time="2025-08-13T07:15:35.562854958Z" level=info msg="StopPodSandbox for \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\"" Aug 13 07:15:35.596448 kubelet[2544]: I0813 07:15:35.595190 2544 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:15:35.596448 kubelet[2544]: I0813 07:15:35.595263 2544 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.643 [WARNING][5688] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2016744c-fc88-45ab-9b7e-62d31ebc5f72", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd", Pod:"goldmane-58fd7646b9-kc2rx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1aaf8264355", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.644 [INFO][5688] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.644 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" iface="eth0" netns="" Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.644 [INFO][5688] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.644 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.686 [INFO][5695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" HandleID="k8s-pod-network.73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.686 [INFO][5695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.686 [INFO][5695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.694 [WARNING][5695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" HandleID="k8s-pod-network.73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.694 [INFO][5695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" HandleID="k8s-pod-network.73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.696 [INFO][5695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:35.699760 containerd[1463]: 2025-08-13 07:15:35.697 [INFO][5688] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:35.699760 containerd[1463]: time="2025-08-13T07:15:35.699755573Z" level=info msg="TearDown network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\" successfully" Aug 13 07:15:35.701052 containerd[1463]: time="2025-08-13T07:15:35.699792124Z" level=info msg="StopPodSandbox for \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\" returns successfully" Aug 13 07:15:35.701052 containerd[1463]: time="2025-08-13T07:15:35.700805942Z" level=info msg="RemovePodSandbox for \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\"" Aug 13 07:15:35.701052 containerd[1463]: time="2025-08-13T07:15:35.700844580Z" level=info msg="Forcibly stopping sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\"" Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.757 [WARNING][5710] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2016744c-fc88-45ab-9b7e-62d31ebc5f72", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-5ace70e1da4b42a7cec2.c.flatcar-212911.internal", ContainerID:"73875145fc3ffa51af9237f720c31a8d7797e9bea77c98c0aa2708b0f2ec99fd", Pod:"goldmane-58fd7646b9-kc2rx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1aaf8264355", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.758 [INFO][5710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.758 [INFO][5710] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" iface="eth0" netns="" Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.758 [INFO][5710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.758 [INFO][5710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.803 [INFO][5717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" HandleID="k8s-pod-network.73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.804 [INFO][5717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.804 [INFO][5717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.819 [WARNING][5717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" HandleID="k8s-pod-network.73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.819 [INFO][5717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" HandleID="k8s-pod-network.73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Workload="ci--4081--3--5--5ace70e1da4b42a7cec2.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--kc2rx-eth0" Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.821 [INFO][5717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:35.828573 containerd[1463]: 2025-08-13 07:15:35.825 [INFO][5710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443" Aug 13 07:15:35.831202 containerd[1463]: time="2025-08-13T07:15:35.828631374Z" level=info msg="TearDown network for sandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\" successfully" Aug 13 07:15:35.834831 containerd[1463]: time="2025-08-13T07:15:35.834759080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:15:35.835333 containerd[1463]: time="2025-08-13T07:15:35.834858078Z" level=info msg="RemovePodSandbox \"73815d3a4bd9a99ae51a97cefa874ef88995f8e233bc044bf0f207e15f7ab443\" returns successfully" Aug 13 07:15:35.950655 kubelet[2544]: I0813 07:15:35.950522 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:15:35.974141 kubelet[2544]: I0813 07:15:35.974036 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9swvw" podStartSLOduration=25.96373464 podStartE2EDuration="41.974009221s" podCreationTimestamp="2025-08-13 07:14:54 +0000 UTC" firstStartedPulling="2025-08-13 07:15:19.218860372 +0000 UTC m=+46.909481592" lastFinishedPulling="2025-08-13 07:15:35.229134966 +0000 UTC m=+62.919756173" observedRunningTime="2025-08-13 07:15:35.971837936 +0000 UTC m=+63.662459166" watchObservedRunningTime="2025-08-13 07:15:35.974009221 +0000 UTC m=+63.664630449" Aug 13 07:15:44.457009 systemd[1]: Started sshd@7-10.128.0.36:22-139.178.68.195:38322.service - OpenSSH per-connection server daemon (139.178.68.195:38322). Aug 13 07:15:44.745452 sshd[5792]: Accepted publickey for core from 139.178.68.195 port 38322 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:15:44.747796 sshd[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:44.754662 systemd-logind[1440]: New session 8 of user core. Aug 13 07:15:44.761701 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:15:45.079749 sshd[5792]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:45.084498 systemd[1]: sshd@7-10.128.0.36:22-139.178.68.195:38322.service: Deactivated successfully. Aug 13 07:15:45.087323 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:15:45.089957 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:15:45.091761 systemd-logind[1440]: Removed session 8. 
Aug 13 07:15:50.134914 systemd[1]: Started sshd@8-10.128.0.36:22-139.178.68.195:46438.service - OpenSSH per-connection server daemon (139.178.68.195:46438). Aug 13 07:15:50.426453 sshd[5819]: Accepted publickey for core from 139.178.68.195 port 46438 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:15:50.429812 sshd[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:50.438593 systemd-logind[1440]: New session 9 of user core. Aug 13 07:15:50.443322 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:15:50.726452 sshd[5819]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:50.731387 systemd[1]: sshd@8-10.128.0.36:22-139.178.68.195:46438.service: Deactivated successfully. Aug 13 07:15:50.734933 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:15:50.737546 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:15:50.739376 systemd-logind[1440]: Removed session 9. Aug 13 07:15:55.788997 systemd[1]: Started sshd@9-10.128.0.36:22-139.178.68.195:46454.service - OpenSSH per-connection server daemon (139.178.68.195:46454). Aug 13 07:15:56.096728 sshd[5833]: Accepted publickey for core from 139.178.68.195 port 46454 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:15:56.099131 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:56.108628 systemd-logind[1440]: New session 10 of user core. Aug 13 07:15:56.115682 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:15:56.393864 sshd[5833]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:56.400343 systemd[1]: sshd@9-10.128.0.36:22-139.178.68.195:46454.service: Deactivated successfully. Aug 13 07:15:56.403668 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:15:56.404949 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. 
Aug 13 07:15:56.406628 systemd-logind[1440]: Removed session 10. Aug 13 07:15:56.452108 systemd[1]: Started sshd@10-10.128.0.36:22-139.178.68.195:46458.service - OpenSSH per-connection server daemon (139.178.68.195:46458). Aug 13 07:15:56.657887 kubelet[2544]: I0813 07:15:56.657210 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:15:56.737795 sshd[5846]: Accepted publickey for core from 139.178.68.195 port 46458 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:15:56.739867 sshd[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:56.746340 systemd-logind[1440]: New session 11 of user core. Aug 13 07:15:56.750673 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:15:57.080323 sshd[5846]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:57.085229 systemd[1]: sshd@10-10.128.0.36:22-139.178.68.195:46458.service: Deactivated successfully. Aug 13 07:15:57.088195 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:15:57.090368 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:15:57.092640 systemd-logind[1440]: Removed session 11. Aug 13 07:15:57.138558 systemd[1]: Started sshd@11-10.128.0.36:22-139.178.68.195:46474.service - OpenSSH per-connection server daemon (139.178.68.195:46474). Aug 13 07:15:57.429005 sshd[5859]: Accepted publickey for core from 139.178.68.195 port 46474 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:15:57.432712 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:57.443127 systemd-logind[1440]: New session 12 of user core. Aug 13 07:15:57.446693 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 13 07:15:57.740127 sshd[5859]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:57.746108 systemd[1]: sshd@11-10.128.0.36:22-139.178.68.195:46474.service: Deactivated successfully. Aug 13 07:15:57.748924 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:15:57.750154 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:15:57.751843 systemd-logind[1440]: Removed session 12. Aug 13 07:16:02.798213 systemd[1]: Started sshd@12-10.128.0.36:22-139.178.68.195:60992.service - OpenSSH per-connection server daemon (139.178.68.195:60992). Aug 13 07:16:03.077097 sshd[5891]: Accepted publickey for core from 139.178.68.195 port 60992 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:03.079262 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:03.085809 systemd-logind[1440]: New session 13 of user core. Aug 13 07:16:03.094766 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:16:03.368627 sshd[5891]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:03.374113 systemd[1]: sshd@12-10.128.0.36:22-139.178.68.195:60992.service: Deactivated successfully. Aug 13 07:16:03.377554 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:16:03.379035 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:16:03.381396 systemd-logind[1440]: Removed session 13. Aug 13 07:16:04.829560 systemd[1]: run-containerd-runc-k8s.io-09b42d7b94417075c9b30293dc6bfba1d810c57c6b2e63aa8072a46eec3f2f16-runc.vR4Lfe.mount: Deactivated successfully. Aug 13 07:16:08.424889 systemd[1]: Started sshd@13-10.128.0.36:22-139.178.68.195:32768.service - OpenSSH per-connection server daemon (139.178.68.195:32768). 
Aug 13 07:16:08.705892 sshd[5955]: Accepted publickey for core from 139.178.68.195 port 32768 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:08.707908 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:08.715042 systemd-logind[1440]: New session 14 of user core. Aug 13 07:16:08.720699 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:16:09.000033 sshd[5955]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:09.005898 systemd[1]: sshd@13-10.128.0.36:22-139.178.68.195:32768.service: Deactivated successfully. Aug 13 07:16:09.008919 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:16:09.010180 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:16:09.012345 systemd-logind[1440]: Removed session 14. Aug 13 07:16:12.666916 systemd[1]: run-containerd-runc-k8s.io-b68e7a45410416b79d66ec6a4bacdd8ed896c9b8518ccaeb22b1459213c25560-runc.uQM7k6.mount: Deactivated successfully. Aug 13 07:16:14.057955 systemd[1]: Started sshd@14-10.128.0.36:22-139.178.68.195:47656.service - OpenSSH per-connection server daemon (139.178.68.195:47656). Aug 13 07:16:14.341982 sshd[5991]: Accepted publickey for core from 139.178.68.195 port 47656 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:14.344039 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:14.350493 systemd-logind[1440]: New session 15 of user core. Aug 13 07:16:14.360698 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:16:14.640331 sshd[5991]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:14.645432 systemd[1]: sshd@14-10.128.0.36:22-139.178.68.195:47656.service: Deactivated successfully. Aug 13 07:16:14.648318 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:16:14.650686 systemd-logind[1440]: Session 15 logged out. 
Waiting for processes to exit. Aug 13 07:16:14.652631 systemd-logind[1440]: Removed session 15. Aug 13 07:16:19.698881 systemd[1]: Started sshd@15-10.128.0.36:22-139.178.68.195:47666.service - OpenSSH per-connection server daemon (139.178.68.195:47666). Aug 13 07:16:19.982713 sshd[6003]: Accepted publickey for core from 139.178.68.195 port 47666 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:19.984768 sshd[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:19.991924 systemd-logind[1440]: New session 16 of user core. Aug 13 07:16:19.997699 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:16:20.291355 sshd[6003]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:20.297647 systemd[1]: sshd@15-10.128.0.36:22-139.178.68.195:47666.service: Deactivated successfully. Aug 13 07:16:20.300706 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:16:20.302056 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:16:20.303790 systemd-logind[1440]: Removed session 16. Aug 13 07:16:20.349939 systemd[1]: Started sshd@16-10.128.0.36:22-139.178.68.195:34960.service - OpenSSH per-connection server daemon (139.178.68.195:34960). Aug 13 07:16:20.633921 sshd[6016]: Accepted publickey for core from 139.178.68.195 port 34960 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:20.636546 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:20.647803 systemd-logind[1440]: New session 17 of user core. Aug 13 07:16:20.653886 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:16:20.982416 sshd[6016]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:20.987720 systemd[1]: sshd@16-10.128.0.36:22-139.178.68.195:34960.service: Deactivated successfully. Aug 13 07:16:20.991171 systemd[1]: session-17.scope: Deactivated successfully. 
Aug 13 07:16:20.993537 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:16:20.995219 systemd-logind[1440]: Removed session 17. Aug 13 07:16:21.038242 systemd[1]: Started sshd@17-10.128.0.36:22-139.178.68.195:34966.service - OpenSSH per-connection server daemon (139.178.68.195:34966). Aug 13 07:16:21.328860 sshd[6027]: Accepted publickey for core from 139.178.68.195 port 34966 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:21.330902 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:21.337591 systemd-logind[1440]: New session 18 of user core. Aug 13 07:16:21.343713 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:16:23.909798 sshd[6027]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:23.920179 systemd[1]: sshd@17-10.128.0.36:22-139.178.68.195:34966.service: Deactivated successfully. Aug 13 07:16:23.927915 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:16:23.931962 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:16:23.935048 systemd-logind[1440]: Removed session 18. Aug 13 07:16:23.967416 systemd[1]: Started sshd@18-10.128.0.36:22-139.178.68.195:34980.service - OpenSSH per-connection server daemon (139.178.68.195:34980). Aug 13 07:16:24.290599 sshd[6043]: Accepted publickey for core from 139.178.68.195 port 34980 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:24.293142 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:24.300580 systemd-logind[1440]: New session 19 of user core. Aug 13 07:16:24.309786 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:16:24.734557 sshd[6043]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:24.739206 systemd[1]: sshd@18-10.128.0.36:22-139.178.68.195:34980.service: Deactivated successfully. 
Aug 13 07:16:24.742343 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:16:24.744826 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:16:24.746459 systemd-logind[1440]: Removed session 19. Aug 13 07:16:24.791873 systemd[1]: Started sshd@19-10.128.0.36:22-139.178.68.195:34986.service - OpenSSH per-connection server daemon (139.178.68.195:34986). Aug 13 07:16:25.073755 sshd[6056]: Accepted publickey for core from 139.178.68.195 port 34986 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:25.075845 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:25.082503 systemd-logind[1440]: New session 20 of user core. Aug 13 07:16:25.086663 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:16:25.361304 sshd[6056]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:25.366250 systemd[1]: sshd@19-10.128.0.36:22-139.178.68.195:34986.service: Deactivated successfully. Aug 13 07:16:25.369687 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:16:25.372175 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:16:25.374001 systemd-logind[1440]: Removed session 20. Aug 13 07:16:30.427177 systemd[1]: Started sshd@20-10.128.0.36:22-139.178.68.195:40826.service - OpenSSH per-connection server daemon (139.178.68.195:40826). Aug 13 07:16:30.723963 sshd[6073]: Accepted publickey for core from 139.178.68.195 port 40826 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:30.726796 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:30.737697 systemd-logind[1440]: New session 21 of user core. Aug 13 07:16:30.743731 systemd[1]: Started session-21.scope - Session 21 of User core. 
Aug 13 07:16:31.060907 sshd[6073]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:31.067436 systemd[1]: sshd@20-10.128.0.36:22-139.178.68.195:40826.service: Deactivated successfully. Aug 13 07:16:31.071530 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:16:31.076214 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:16:31.079135 systemd-logind[1440]: Removed session 21. Aug 13 07:16:34.857348 systemd[1]: run-containerd-runc-k8s.io-09b42d7b94417075c9b30293dc6bfba1d810c57c6b2e63aa8072a46eec3f2f16-runc.2m0v9O.mount: Deactivated successfully. Aug 13 07:16:36.120371 systemd[1]: Started sshd@21-10.128.0.36:22-139.178.68.195:40842.service - OpenSSH per-connection server daemon (139.178.68.195:40842). Aug 13 07:16:36.445753 sshd[6136]: Accepted publickey for core from 139.178.68.195 port 40842 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:36.446629 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:36.463562 systemd-logind[1440]: New session 22 of user core. Aug 13 07:16:36.468093 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:16:36.789361 sshd[6136]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:36.797266 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:16:36.798990 systemd[1]: sshd@21-10.128.0.36:22-139.178.68.195:40842.service: Deactivated successfully. Aug 13 07:16:36.803313 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:16:36.807290 systemd-logind[1440]: Removed session 22. Aug 13 07:16:41.858594 systemd[1]: Started sshd@22-10.128.0.36:22-139.178.68.195:54752.service - OpenSSH per-connection server daemon (139.178.68.195:54752). 
Aug 13 07:16:42.169013 sshd[6162]: Accepted publickey for core from 139.178.68.195 port 54752 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:16:42.171527 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:42.179018 systemd-logind[1440]: New session 23 of user core. Aug 13 07:16:42.187730 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:16:42.506184 sshd[6162]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:42.517368 systemd[1]: sshd@22-10.128.0.36:22-139.178.68.195:54752.service: Deactivated successfully. Aug 13 07:16:42.524098 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:16:42.528456 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:16:42.534451 systemd-logind[1440]: Removed session 23. Aug 13 07:16:42.689722 systemd[1]: run-containerd-runc-k8s.io-b68e7a45410416b79d66ec6a4bacdd8ed896c9b8518ccaeb22b1459213c25560-runc.dKZgTr.mount: Deactivated successfully.