Aug 13 07:10:39.109109 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:10:39.109162 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:10:39.109181 kernel: BIOS-provided physical RAM map:
Aug 13 07:10:39.109197 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Aug 13 07:10:39.109211 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Aug 13 07:10:39.109225 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Aug 13 07:10:39.109250 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Aug 13 07:10:39.109269 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Aug 13 07:10:39.109284 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Aug 13 07:10:39.109300 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Aug 13 07:10:39.109315 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Aug 13 07:10:39.109331 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Aug 13 07:10:39.109345 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Aug 13 07:10:39.109360 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Aug 13 07:10:39.109391 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Aug 13 07:10:39.109409 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Aug 13 07:10:39.109427 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Aug 13 07:10:39.109443 kernel: NX (Execute Disable) protection: active
Aug 13 07:10:39.109461 kernel: APIC: Static calls initialized
Aug 13 07:10:39.109477 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:10:39.109527 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Aug 13 07:10:39.109543 kernel: SMBIOS 2.4 present.
Aug 13 07:10:39.109557 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Aug 13 07:10:39.109570 kernel: Hypervisor detected: KVM
Aug 13 07:10:39.109589 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:10:39.109603 kernel: kvm-clock: using sched offset of 12575236028 cycles
Aug 13 07:10:39.109617 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:10:39.109633 kernel: tsc: Detected 2299.998 MHz processor
Aug 13 07:10:39.109649 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:10:39.109664 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:10:39.109679 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Aug 13 07:10:39.109694 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Aug 13 07:10:39.109710 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:10:39.109729 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Aug 13 07:10:39.109744 kernel: Using GB pages for direct mapping
Aug 13 07:10:39.109760 kernel: Secure boot disabled
Aug 13 07:10:39.109775 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:10:39.109791 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Aug 13 07:10:39.109806 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Aug 13 07:10:39.109823 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Aug 13 07:10:39.109845 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Aug 13 07:10:39.109865 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Aug 13 07:10:39.109881 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212)
Aug 13 07:10:39.109898 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Aug 13 07:10:39.109915 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Aug 13 07:10:39.109932 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Aug 13 07:10:39.109949 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Aug 13 07:10:39.109969 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Aug 13 07:10:39.109986 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Aug 13 07:10:39.110003 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Aug 13 07:10:39.110020 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Aug 13 07:10:39.110037 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Aug 13 07:10:39.110053 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Aug 13 07:10:39.110071 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Aug 13 07:10:39.110087 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Aug 13 07:10:39.110104 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Aug 13 07:10:39.110125 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Aug 13 07:10:39.110151 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:10:39.110168 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:10:39.110185 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:10:39.110202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Aug 13 07:10:39.110220 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Aug 13 07:10:39.110250 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Aug 13 07:10:39.110267 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Aug 13 07:10:39.110285 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Aug 13 07:10:39.110306 kernel: Zone ranges:
Aug 13 07:10:39.110323 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:10:39.110341 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 07:10:39.110359 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 07:10:39.110376 kernel: Movable zone start for each node
Aug 13 07:10:39.110394 kernel: Early memory node ranges
Aug 13 07:10:39.110411 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Aug 13 07:10:39.110428 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Aug 13 07:10:39.110446 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Aug 13 07:10:39.110467 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Aug 13 07:10:39.110484 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 07:10:39.110525 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Aug 13 07:10:39.110542 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:10:39.110560 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Aug 13 07:10:39.110579 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Aug 13 07:10:39.110597 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Aug 13 07:10:39.110616 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Aug 13 07:10:39.110634 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 13 07:10:39.110652 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:10:39.110676 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:10:39.110695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:10:39.110713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:10:39.110731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:10:39.110750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:10:39.110768 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:10:39.110787 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:10:39.110804 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:10:39.110822 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:10:39.110843 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:10:39.110860 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:10:39.110877 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:10:39.110895 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:10:39.110911 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:10:39.110928 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:10:39.110946 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:10:39.110965 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:10:39.110986 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:10:39.111003 kernel: random: crng init done
Aug 13 07:10:39.111019 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 07:10:39.111035 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:10:39.111051 kernel: Fallback order for Node 0: 0
Aug 13 07:10:39.111068 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Aug 13 07:10:39.111085 kernel: Policy zone: Normal
Aug 13 07:10:39.111102 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:10:39.111119 kernel: software IO TLB: area num 2.
Aug 13 07:10:39.111141 kernel: Memory: 7513396K/7860584K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 346928K reserved, 0K cma-reserved)
Aug 13 07:10:39.111158 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:10:39.111175 kernel: Kernel/User page tables isolation: enabled
Aug 13 07:10:39.111192 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:10:39.111209 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:10:39.111227 kernel: Dynamic Preempt: voluntary
Aug 13 07:10:39.111259 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:10:39.111279 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:10:39.111315 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:10:39.111334 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:10:39.111352 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:10:39.111375 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:10:39.111394 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:10:39.111413 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:10:39.111432 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 07:10:39.111451 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:10:39.111470 kernel: Console: colour dummy device 80x25
Aug 13 07:10:39.111530 kernel: printk: console [ttyS0] enabled
Aug 13 07:10:39.111550 kernel: ACPI: Core revision 20230628
Aug 13 07:10:39.111570 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:10:39.111586 kernel: x2apic enabled
Aug 13 07:10:39.111606 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:10:39.111626 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Aug 13 07:10:39.111646 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:10:39.111664 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Aug 13 07:10:39.111689 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Aug 13 07:10:39.111708 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Aug 13 07:10:39.111728 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:10:39.111748 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Aug 13 07:10:39.111767 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Aug 13 07:10:39.111787 kernel: Spectre V2 : Mitigation: IBRS
Aug 13 07:10:39.111807 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:10:39.111826 kernel: RETBleed: Mitigation: IBRS
Aug 13 07:10:39.111846 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:10:39.111869 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Aug 13 07:10:39.111889 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:10:39.111908 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 07:10:39.111927 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:10:39.111946 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:10:39.111965 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:10:39.111984 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:10:39.112004 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:10:39.112023 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:10:39.112047 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 07:10:39.112067 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:10:39.112086 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:10:39.112102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:10:39.112121 kernel: landlock: Up and running.
Aug 13 07:10:39.112139 kernel: SELinux: Initializing.
Aug 13 07:10:39.112158 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:10:39.112177 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:10:39.112196 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Aug 13 07:10:39.112219 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:10:39.112245 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:10:39.112265 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:10:39.112284 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Aug 13 07:10:39.112301 kernel: signal: max sigframe size: 1776
Aug 13 07:10:39.112316 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:10:39.112336 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:10:39.112356 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:10:39.112376 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:10:39.112400 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:10:39.112420 kernel: .... node #0, CPUs: #1
Aug 13 07:10:39.112441 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 13 07:10:39.112461 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 07:10:39.112477 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:10:39.112511 kernel: smpboot: Max logical packages: 1
Aug 13 07:10:39.112532 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Aug 13 07:10:39.112551 kernel: devtmpfs: initialized
Aug 13 07:10:39.112576 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:10:39.112604 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Aug 13 07:10:39.112624 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:10:39.112642 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:10:39.112668 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:10:39.112688 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:10:39.112708 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:10:39.112729 kernel: audit: type=2000 audit(1755069037.721:1): state=initialized audit_enabled=0 res=1
Aug 13 07:10:39.112747 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:10:39.112771 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:10:39.112791 kernel: cpuidle: using governor menu
Aug 13 07:10:39.112809 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:10:39.112829 kernel: dca service started, version 1.12.1
Aug 13 07:10:39.112849 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:10:39.112869 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:10:39.112889 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:10:39.112909 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:10:39.112929 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:10:39.112971 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:10:39.112991 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:10:39.113010 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:10:39.113029 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:10:39.113049 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 13 07:10:39.113069 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:10:39.113095 kernel: ACPI: Interpreter enabled
Aug 13 07:10:39.113115 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:10:39.113135 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:10:39.113164 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:10:39.113185 kernel: PCI: Ignoring E820 reservations for host bridge windows
Aug 13 07:10:39.113204 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Aug 13 07:10:39.113224 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:10:39.113556 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:10:39.113765 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 07:10:39.113954 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 07:10:39.113985 kernel: PCI host bridge to bus 0000:00
Aug 13 07:10:39.114172 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:10:39.114359 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:10:39.114580 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:10:39.114752 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Aug 13 07:10:39.114927 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:10:39.115165 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:10:39.115415 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Aug 13 07:10:39.115659 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 07:10:39.115870 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 07:10:39.116080 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Aug 13 07:10:39.116290 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 07:10:39.116487 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Aug 13 07:10:39.116719 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:10:39.116913 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 07:10:39.117107 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Aug 13 07:10:39.117318 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:10:39.117537 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Aug 13 07:10:39.117744 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Aug 13 07:10:39.117769 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:10:39.117795 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:10:39.117815 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:10:39.117836 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:10:39.117856 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:10:39.117876 kernel: iommu: Default domain type: Translated
Aug 13 07:10:39.117896 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:10:39.117915 kernel: efivars: Registered efivars operations
Aug 13 07:10:39.117935 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:10:39.117955 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:10:39.117978 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Aug 13 07:10:39.117997 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Aug 13 07:10:39.118023 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Aug 13 07:10:39.118041 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Aug 13 07:10:39.118061 kernel: vgaarb: loaded
Aug 13 07:10:39.118081 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:10:39.118101 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:10:39.118120 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:10:39.118139 kernel: pnp: PnP ACPI init
Aug 13 07:10:39.118164 kernel: pnp: PnP ACPI: found 7 devices
Aug 13 07:10:39.118190 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:10:39.118219 kernel: NET: Registered PF_INET protocol family
Aug 13 07:10:39.118245 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:10:39.118266 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 07:10:39.118286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:10:39.118306 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:10:39.118326 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 13 07:10:39.118346 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 07:10:39.118369 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:10:39.118390 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:10:39.118410 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:10:39.118430 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:10:39.118691 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:10:39.118867 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:10:39.119041 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:10:39.119207 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Aug 13 07:10:39.119469 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:10:39.119550 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:10:39.119571 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 07:10:39.119590 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Aug 13 07:10:39.119610 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:10:39.119629 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:10:39.119649 kernel: clocksource: Switched to clocksource tsc
Aug 13 07:10:39.119668 kernel: Initialise system trusted keyrings
Aug 13 07:10:39.119694 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 07:10:39.119713 kernel: Key type asymmetric registered
Aug 13 07:10:39.119733 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:10:39.119751 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:10:39.119771 kernel: io scheduler mq-deadline registered
Aug 13 07:10:39.119790 kernel: io scheduler kyber registered
Aug 13 07:10:39.119809 kernel: io scheduler bfq registered
Aug 13 07:10:39.119828 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:10:39.119849 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 07:10:39.120051 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Aug 13 07:10:39.120075 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 13 07:10:39.120278 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Aug 13 07:10:39.120302 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 07:10:39.120485 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Aug 13 07:10:39.120532 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:10:39.120552 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:10:39.120572 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 07:10:39.120591 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Aug 13 07:10:39.120616 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Aug 13 07:10:39.120820 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Aug 13 07:10:39.120846 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:10:39.120865 kernel: i8042: Warning: Keylock active
Aug 13 07:10:39.120889 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:10:39.120909 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:10:39.121103 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 07:10:39.121302 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 07:10:39.121475 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T07:10:38 UTC (1755069038)
Aug 13 07:10:39.121687 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 07:10:39.121711 kernel: intel_pstate: CPU model not supported
Aug 13 07:10:39.121736 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:10:39.121755 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:10:39.121775 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:10:39.121794 kernel: Segment Routing with IPv6
Aug 13 07:10:39.121813 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:10:39.121838 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:10:39.121857 kernel: Key type dns_resolver registered
Aug 13 07:10:39.121876 kernel: IPI shorthand broadcast: enabled
Aug 13 07:10:39.121895 kernel: sched_clock: Marking stable (892004847, 166930544)->(1140076422, -81141031)
Aug 13 07:10:39.121915 kernel: registered taskstats version 1
Aug 13 07:10:39.121934 kernel: Loading compiled-in X.509 certificates
Aug 13 07:10:39.121953 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:10:39.121972 kernel: Key type .fscrypt registered
Aug 13 07:10:39.121990 kernel: Key type fscrypt-provisioning registered
Aug 13 07:10:39.122014 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:10:39.122034 kernel: ima: No architecture policies found
Aug 13 07:10:39.122053 kernel: clk: Disabling unused clocks
Aug 13 07:10:39.122072 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:10:39.122091 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:10:39.122111 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:10:39.122130 kernel: Run /init as init process
Aug 13 07:10:39.122149 kernel: with arguments:
Aug 13 07:10:39.122172 kernel: /init
Aug 13 07:10:39.122191 kernel: with environment:
Aug 13 07:10:39.122209 kernel: HOME=/
Aug 13 07:10:39.122235 kernel: TERM=linux
Aug 13 07:10:39.122260 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:10:39.122280 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:10:39.122303 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:10:39.122331 systemd[1]: Detected virtualization google.
Aug 13 07:10:39.122357 systemd[1]: Detected architecture x86-64.
Aug 13 07:10:39.122376 systemd[1]: Running in initrd.
Aug 13 07:10:39.122400 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:10:39.122420 systemd[1]: Hostname set to .
Aug 13 07:10:39.122446 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:10:39.122467 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:10:39.122487 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:10:39.122537 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:10:39.122563 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:10:39.122584 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:10:39.122604 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:10:39.122625 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:10:39.122648 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:10:39.122669 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:10:39.122693 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:10:39.122713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:10:39.122734 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:10:39.122774 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:10:39.122799 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:10:39.122820 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:10:39.122840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:10:39.122865 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:10:39.122887 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:10:39.122909 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:10:39.122930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:10:39.122951 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:10:39.122978 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:10:39.122999 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:10:39.123020 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:10:39.123045 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:10:39.123066 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:10:39.123087 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:10:39.123108 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:10:39.123130 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:10:39.123151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:10:39.123208 systemd-journald[183]: Collecting audit messages is disabled.
Aug 13 07:10:39.123264 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:10:39.123285 systemd-journald[183]: Journal started
Aug 13 07:10:39.123331 systemd-journald[183]: Runtime Journal (/run/log/journal/9cfe884d1d234d5d9a618f1ea6075346) is 8.0M, max 148.7M, 140.7M free.
Aug 13 07:10:39.124083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:10:39.129924 systemd-modules-load[184]: Inserted module 'overlay' Aug 13 07:10:39.134040 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:10:39.140735 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:10:39.153785 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:10:39.157709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:10:39.161983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:10:39.172222 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:10:39.186036 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:10:39.197518 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:10:39.200405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:10:39.201653 kernel: Bridge firewalling registered Aug 13 07:10:39.204538 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 07:10:39.204946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:10:39.208567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:10:39.212740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:10:39.232082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:10:39.242577 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:10:39.251046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:10:39.256008 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 07:10:39.266735 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:10:39.289018 dracut-cmdline[213]: dracut-dracut-053 Aug 13 07:10:39.293826 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:10:39.320910 systemd-resolved[217]: Positive Trust Anchors: Aug 13 07:10:39.321485 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:10:39.321707 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:10:39.329522 systemd-resolved[217]: Defaulting to hostname 'linux'. Aug 13 07:10:39.334349 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:10:39.344285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:10:39.403543 kernel: SCSI subsystem initialized Aug 13 07:10:39.415520 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 07:10:39.428543 kernel: iscsi: registered transport (tcp) Aug 13 07:10:39.454538 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:10:39.454621 kernel: QLogic iSCSI HBA Driver Aug 13 07:10:39.508564 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:10:39.517743 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:10:39.547131 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:10:39.547220 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:10:39.547269 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:10:39.593533 kernel: raid6: avx2x4 gen() 17898 MB/s Aug 13 07:10:39.610527 kernel: raid6: avx2x2 gen() 17860 MB/s Aug 13 07:10:39.628142 kernel: raid6: avx2x1 gen() 13939 MB/s Aug 13 07:10:39.628199 kernel: raid6: using algorithm avx2x4 gen() 17898 MB/s Aug 13 07:10:39.646105 kernel: raid6: .... xor() 7499 MB/s, rmw enabled Aug 13 07:10:39.646165 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:10:39.670544 kernel: xor: automatically using best checksumming function avx Aug 13 07:10:39.846533 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:10:39.860971 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:10:39.868763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:10:39.895546 systemd-udevd[400]: Using default interface naming scheme 'v255'. Aug 13 07:10:39.902504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:10:39.915488 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:10:39.946865 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Aug 13 07:10:39.986613 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 13 07:10:39.998790 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:10:40.092555 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:10:40.109782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:10:40.147157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:10:40.159554 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:10:40.167618 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:10:40.175684 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:10:40.184735 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:10:40.202554 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:10:40.218529 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:10:40.220514 kernel: AES CTR mode by8 optimization enabled Aug 13 07:10:40.224751 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:10:40.364711 kernel: scsi host0: Virtio SCSI HBA Aug 13 07:10:40.380589 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Aug 13 07:10:40.401657 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:10:40.401846 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:10:40.407001 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:10:40.408763 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:10:40.409022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 07:10:40.435446 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Aug 13 07:10:40.435825 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Aug 13 07:10:40.415536 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:10:40.444337 kernel: sd 0:0:1:0: [sda] Write Protect is off Aug 13 07:10:40.444981 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Aug 13 07:10:40.445506 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 07:10:40.432156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:10:40.451982 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 07:10:40.452039 kernel: GPT:17805311 != 25165823 Aug 13 07:10:40.452065 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:10:40.452870 kernel: GPT:17805311 != 25165823 Aug 13 07:10:40.454197 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:10:40.454242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:10:40.455985 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Aug 13 07:10:40.472314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:10:40.480673 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:10:40.530578 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (442) Aug 13 07:10:40.535817 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:10:40.567666 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (459) Aug 13 07:10:40.574903 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Aug 13 07:10:40.581567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
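The GPT complaints above are the usual signature of a disk image that was written to a larger disk: the backup GPT header still sits at the last LBA of the original image (17805311) rather than the last LBA of the grown disk. A minimal sketch of the arithmetic, using the figures from this log; the commented repair commands are illustrative only (they assume gdisk/parted are available and are the standard fix the kernel's "Use GNU Parted" hint refers to):

```shell
# Figures from the log: the disk has 25165824 512-byte logical blocks,
# but the primary GPT header says the backup header is at LBA 17805311.
SECTORS=25165824
GPT_ALT_LBA=17805311

# The backup GPT header belongs on the last LBA of the disk.
EXPECTED_ALT_LBA=$((SECTORS - 1))
echo "expected=$EXPECTED_ALT_LBA actual=$GPT_ALT_LBA"
# This reproduces the kernel's "GPT:17805311 != 25165823" mismatch.

# Illustrative repair, not run here: relocate the backup header to the end
# of the grown disk (Flatcar's disk-uuid/extend-filesystems handle this on
# first boot, as seen later in the log).
# sgdisk --move-second-header /dev/sda   # from gdisk
# parted /dev/sda print                  # parted offers to "Fix" interactively
```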
Aug 13 07:10:40.612330 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Aug 13 07:10:40.628656 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Aug 13 07:10:40.659191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Aug 13 07:10:40.680739 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:10:40.726671 disk-uuid[548]: Primary Header is updated.
Aug 13 07:10:40.726671 disk-uuid[548]: Secondary Entries is updated.
Aug 13 07:10:40.726671 disk-uuid[548]: Secondary Header is updated.
Aug 13 07:10:40.757531 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:10:40.757577 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:10:40.786785 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:10:41.801542 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:10:41.801628 disk-uuid[549]: The operation has completed successfully.
Aug 13 07:10:41.878354 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:10:41.878521 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:10:41.902744 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:10:41.929209 sh[566]: Success
Aug 13 07:10:41.952817 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:10:42.047983 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:10:42.055532 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:10:42.080823 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:10:42.123634 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:10:42.123730 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:10:42.123756 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:10:42.133064 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:10:42.145721 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:10:42.174528 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 07:10:42.180764 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:10:42.181750 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:10:42.188773 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:10:42.237302 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:42.237363 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:10:42.237390 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:10:42.259842 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:10:42.259940 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:10:42.271805 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:10:42.293883 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:42.298940 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:10:42.325811 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:10:42.472058 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
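The /dev/mapper/usr device mounted above is a dm-verity mapping created by verity-setup.service; its root hash comes from the `verity.usrhash=` kernel argument shown in the command line earlier in this log. A minimal sketch, assuming only the values visible in the log; the commented `veritysetup` invocation is illustrative (the exact hash-device layout and offsets are Flatcar-specific and not shown in the log):

```shell
# Root hash taken from the verity.usrhash= kernel argument in this log.
ROOT_HASH=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a

# A SHA-256 verity root hash is 64 hex characters; sanity-check the format.
[ "${#ROOT_HASH}" -eq 64 ] && echo "root hash format OK"

# Illustrative only (requires the real device and Flatcar's hash layout):
# open the USR-A partition as a read-only, integrity-checked mapping,
# roughly what verity-setup.service does before /sysusr/usr is mounted.
# veritysetup open /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 \
#     usr <hash-device> "$ROOT_HASH"
```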
Aug 13 07:10:42.484438 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:10:42.541189 ignition[629]: Ignition 2.19.0
Aug 13 07:10:42.541210 ignition[629]: Stage: fetch-offline
Aug 13 07:10:42.541279 ignition[629]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:42.541296 ignition[629]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:42.541472 ignition[629]: parsed url from cmdline: ""
Aug 13 07:10:42.541479 ignition[629]: no config URL provided
Aug 13 07:10:42.541504 ignition[629]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:10:42.541519 ignition[629]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:10:42.541531 ignition[629]: failed to fetch config: resource requires networking
Aug 13 07:10:42.541811 ignition[629]: Ignition finished successfully
Aug 13 07:10:42.543937 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:10:42.566191 systemd-networkd[749]: lo: Link UP
Aug 13 07:10:42.566196 systemd-networkd[749]: lo: Gained carrier
Aug 13 07:10:42.567956 systemd-networkd[749]: Enumeration completed
Aug 13 07:10:42.568625 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:10:42.568633 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:10:42.570681 systemd-networkd[749]: eth0: Link UP
Aug 13 07:10:42.570687 systemd-networkd[749]: eth0: Gained carrier
Aug 13 07:10:42.570698 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:10:42.586961 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:10:42.592633 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.53/32, gateway 10.128.0.1 acquired from 169.254.169.254
Aug 13 07:10:42.606979 systemd[1]: Reached target network.target - Network.
Aug 13 07:10:42.628744 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 07:10:42.684644 ignition[758]: Ignition 2.19.0
Aug 13 07:10:42.684654 ignition[758]: Stage: fetch
Aug 13 07:10:42.684869 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:42.684882 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:42.685004 ignition[758]: parsed url from cmdline: ""
Aug 13 07:10:42.685011 ignition[758]: no config URL provided
Aug 13 07:10:42.685020 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:10:42.685033 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:10:42.685055 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Aug 13 07:10:42.688026 ignition[758]: GET result: OK
Aug 13 07:10:42.688138 ignition[758]: parsing config with SHA512: fae5da6016ff47694795ffe5d2708eedb6e9c1790083b07ee54eb604a216b7204011e46c337c0c0a3ceb4871a6d3b1ae984db83bd8d6818e5f30277f415f536f
Aug 13 07:10:42.695634 unknown[758]: fetched base config from "system"
Aug 13 07:10:42.695648 unknown[758]: fetched base config from "system"
Aug 13 07:10:42.695659 unknown[758]: fetched user config from "gcp"
Aug 13 07:10:42.696360 ignition[758]: fetch: fetch complete
Aug 13 07:10:42.696373 ignition[758]: fetch: fetch passed
Aug 13 07:10:42.696433 ignition[758]: Ignition finished successfully
Aug 13 07:10:42.698413 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 07:10:42.723786 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:10:42.772306 ignition[765]: Ignition 2.19.0
Aug 13 07:10:42.772316 ignition[765]: Stage: kargs
Aug 13 07:10:42.772546 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:42.772565 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:42.773808 ignition[765]: kargs: kargs passed
Aug 13 07:10:42.773874 ignition[765]: Ignition finished successfully
Aug 13 07:10:42.775188 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:10:42.785827 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:10:42.820842 ignition[771]: Ignition 2.19.0
Aug 13 07:10:42.820852 ignition[771]: Stage: disks
Aug 13 07:10:42.821060 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:42.821073 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:42.822225 ignition[771]: disks: disks passed
Aug 13 07:10:42.822290 ignition[771]: Ignition finished successfully
Aug 13 07:10:42.827020 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:10:42.847576 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:10:42.865697 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:10:42.883781 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:10:42.897734 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:10:42.897922 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:10:42.929749 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:10:42.976293 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Aug 13 07:10:43.149711 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:10:43.154659 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:10:43.297548 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:10:43.298384 systemd[1]: Mounted sysroot.mount - /sysroot.
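In the fetch stage above, Ignition retrieved the user-provided config from the GCE metadata server (the `GET ... user-data: attempt #1` / `GET result: OK` pair). A sketch of the equivalent manual query; only the URL construction runs here, and the commented `curl` call is illustrative (it only works from inside a GCE instance, which requires the `Metadata-Flavor: Google` request header):

```shell
# Endpoint Ignition queried in this log for the user-data attribute.
MD_BASE=http://169.254.169.254/computeMetadata/v1
URL="$MD_BASE/instance/attributes/user-data"
echo "$URL"

# Illustrative only, from a GCE instance:
# curl -s -H "Metadata-Flavor: Google" "$URL"
```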
Aug 13 07:10:43.313426 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:10:43.338649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:10:43.348457 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:10:43.371260 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:10:43.371353 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:10:43.371397 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:10:43.400803 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:10:43.412445 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (788)
Aug 13 07:10:43.412486 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:43.412532 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:10:43.412557 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:10:43.448479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:10:43.465811 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:10:43.465849 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:10:43.479752 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:10:43.667405 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:10:43.677722 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:10:43.687730 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:10:43.697665 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:10:43.846259 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:10:43.870682 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:10:43.884802 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:10:43.909265 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:10:43.927720 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:43.964646 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:10:43.964920 ignition[900]: INFO : Ignition 2.19.0
Aug 13 07:10:43.964920 ignition[900]: INFO : Stage: mount
Aug 13 07:10:43.964920 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:43.964920 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:43.975147 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:10:43.998683 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:10:44.036750 ignition[900]: INFO : mount: mount passed
Aug 13 07:10:44.036750 ignition[900]: INFO : Ignition finished successfully
Aug 13 07:10:44.279879 systemd-networkd[749]: eth0: Gained IPv6LL
Aug 13 07:10:44.304777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:10:44.353566 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (912)
Aug 13 07:10:44.371629 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:44.371757 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:10:44.371808 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:10:44.395162 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:10:44.395279 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:10:44.398778 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:10:44.441067 ignition[929]: INFO : Ignition 2.19.0
Aug 13 07:10:44.441067 ignition[929]: INFO : Stage: files
Aug 13 07:10:44.454059 unknown[929]: wrote ssh authorized keys file for user: core
Aug 13 07:10:44.456673 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:44.456673 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:44.456673 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:10:44.456673 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:10:44.456673 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:10:44.456673 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:10:44.456673 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:10:44.456673 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:10:44.557773 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 07:10:44.557773 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Aug 13 07:10:44.592709 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:10:44.746201 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Aug 13 07:10:45.263517 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:10:45.603695 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:10:45.603695 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:10:45.609712 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:10:45.638790 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: files passed
Aug 13 07:10:45.643751 ignition[929]: INFO : Ignition finished successfully
Aug 13 07:10:45.648735 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:10:45.696301 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:10:45.696424 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:10:45.747238 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:10:45.788149 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:10:45.810743 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:10:45.855735 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:10:45.855735 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:10:45.894719 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:10:45.897740 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:10:45.897905 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:10:45.920649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:10:45.940843 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:10:45.960973 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:10:45.967846 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:10:46.062180 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:10:46.067803 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:10:46.122239 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:10:46.133929 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:10:46.155002 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:10:46.172891 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:10:46.173106 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:10:46.199953 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:10:46.220888 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:10:46.238968 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:10:46.256891 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:10:46.277945 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:10:46.298963 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:10:46.318896 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:10:46.341088 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:10:46.361903 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:10:46.381957 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:10:46.392057 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:10:46.392266 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:10:46.423065 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:10:46.433079 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:10:46.451039 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:10:46.451202 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:10:46.471011 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:10:46.471207 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:10:46.511024 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:10:46.511249 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:10:46.528097 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:10:46.528286 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:10:46.554948 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:10:46.576966 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:10:46.577240 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:10:46.612694 ignition[982]: INFO : Ignition 2.19.0
Aug 13 07:10:46.612694 ignition[982]: INFO : Stage: umount
Aug 13 07:10:46.612694 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:46.612694 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:46.612694 ignition[982]: INFO : umount: umount passed
Aug 13 07:10:46.612694 ignition[982]: INFO : Ignition finished successfully
Aug 13 07:10:46.631187 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:10:46.637890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:10:46.638144 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:10:46.708057 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:10:46.708279 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:10:46.742834 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:10:46.743973 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:10:46.744104 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:10:46.759433 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:10:46.759606 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:10:46.781333 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:10:46.781478 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:10:46.790227 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:10:46.790300 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:10:46.816053 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:10:46.816144 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:10:46.835051 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:10:46.835144 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:10:46.860008 systemd[1]: Stopped target network.target - Network.
Aug 13 07:10:46.868952 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:10:46.869056 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:10:46.897935 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:10:46.905931 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:10:46.909618 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:10:46.920973 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:10:46.946853 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:10:46.965929 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:10:46.966017 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:10:46.983960 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:10:46.984050 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:10:46.993044 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:10:46.993145 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:10:47.018953 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:10:47.019054 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:10:47.038946 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:10:47.039049 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:10:47.066242 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:10:47.072613 systemd-networkd[749]: eth0: DHCPv6 lease lost Aug 13 07:10:47.086004 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:10:47.109371 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:10:47.109576 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:10:47.129398 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:10:47.129681 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:10:47.137889 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:10:47.137952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:10:47.159688 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:10:47.187684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:10:47.187913 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 13 07:10:47.197986 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:10:47.198070 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:10:47.225935 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:10:47.226036 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:10:47.243948 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:10:47.244059 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:10:47.265126 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:10:47.298377 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:10:47.298629 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:10:47.328158 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:10:47.328317 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:10:47.344873 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:10:47.344964 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:10:47.364227 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:10:47.364349 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:10:47.402039 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:10:47.728747 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Aug 13 07:10:47.402286 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:10:47.449772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:10:47.450038 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 07:10:47.499825 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:10:47.530734 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:10:47.531011 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:10:47.552959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:10:47.553058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:10:47.576619 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:10:47.576774 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:10:47.597329 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:10:47.597469 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:10:47.618413 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:10:47.644879 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:10:47.679823 systemd[1]: Switching root. 
Aug 13 07:10:47.878659 systemd-journald[183]: Journal stopped Aug 13 07:10:39.109109 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:10:39.109162 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:10:39.109181 kernel: BIOS-provided physical RAM map: Aug 13 07:10:39.109197 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Aug 13 07:10:39.109211 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Aug 13 07:10:39.109225 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Aug 13 07:10:39.109250 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Aug 13 07:10:39.109269 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Aug 13 07:10:39.109284 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Aug 13 07:10:39.109300 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Aug 13 07:10:39.109315 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Aug 13 07:10:39.109331 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Aug 13 07:10:39.109345 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Aug 13 07:10:39.109360 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Aug 13 07:10:39.109391 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Aug 13 07:10:39.109409 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Aug 13 
07:10:39.109427 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Aug 13 07:10:39.109443 kernel: NX (Execute Disable) protection: active Aug 13 07:10:39.109461 kernel: APIC: Static calls initialized Aug 13 07:10:39.109477 kernel: efi: EFI v2.7 by EDK II Aug 13 07:10:39.109527 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Aug 13 07:10:39.109543 kernel: SMBIOS 2.4 present. Aug 13 07:10:39.109557 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Aug 13 07:10:39.109570 kernel: Hypervisor detected: KVM Aug 13 07:10:39.109589 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:10:39.109603 kernel: kvm-clock: using sched offset of 12575236028 cycles Aug 13 07:10:39.109617 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:10:39.109633 kernel: tsc: Detected 2299.998 MHz processor Aug 13 07:10:39.109649 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:10:39.109664 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:10:39.109679 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Aug 13 07:10:39.109694 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Aug 13 07:10:39.109710 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:10:39.109729 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Aug 13 07:10:39.109744 kernel: Using GB pages for direct mapping Aug 13 07:10:39.109760 kernel: Secure boot disabled Aug 13 07:10:39.109775 kernel: ACPI: Early table checksum verification disabled Aug 13 07:10:39.109791 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Aug 13 07:10:39.109806 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Aug 13 07:10:39.109823 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Aug 13 
07:10:39.109845 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Aug 13 07:10:39.109865 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Aug 13 07:10:39.109881 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Aug 13 07:10:39.109898 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Aug 13 07:10:39.109915 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Aug 13 07:10:39.109932 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Aug 13 07:10:39.109949 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Aug 13 07:10:39.109969 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Aug 13 07:10:39.109986 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Aug 13 07:10:39.110003 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Aug 13 07:10:39.110020 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Aug 13 07:10:39.110037 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Aug 13 07:10:39.110053 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Aug 13 07:10:39.110071 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Aug 13 07:10:39.110087 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Aug 13 07:10:39.110104 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Aug 13 07:10:39.110125 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Aug 13 07:10:39.110151 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 07:10:39.110168 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 07:10:39.110185 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 07:10:39.110202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Aug 13 07:10:39.110220 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Aug 13 07:10:39.110250 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Aug 13 07:10:39.110267 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Aug 13 07:10:39.110285 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Aug 13 07:10:39.110306 kernel: Zone ranges: Aug 13 07:10:39.110323 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:10:39.110341 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 07:10:39.110359 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Aug 13 07:10:39.110376 kernel: Movable zone start for each node Aug 13 07:10:39.110394 kernel: Early memory node ranges Aug 13 07:10:39.110411 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Aug 13 07:10:39.110428 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Aug 13 07:10:39.110446 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Aug 13 07:10:39.110467 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Aug 13 07:10:39.110484 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Aug 13 07:10:39.110525 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Aug 13 07:10:39.110542 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:10:39.110560 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Aug 13 07:10:39.110579 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Aug 13 07:10:39.110597 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 13 07:10:39.110616 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Aug 13 07:10:39.110634 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 13 07:10:39.110652 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:10:39.110676 kernel: IOAPIC[0]: apic_id 0, 
version 17, address 0xfec00000, GSI 0-23 Aug 13 07:10:39.110695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:10:39.110713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:10:39.110731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:10:39.110750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:10:39.110768 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:10:39.110787 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 07:10:39.110804 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Aug 13 07:10:39.110822 kernel: Booting paravirtualized kernel on KVM Aug 13 07:10:39.110843 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:10:39.110860 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 07:10:39.110877 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Aug 13 07:10:39.110895 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 13 07:10:39.110911 kernel: pcpu-alloc: [0] 0 1 Aug 13 07:10:39.110928 kernel: kvm-guest: PV spinlocks enabled Aug 13 07:10:39.110946 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 07:10:39.110965 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:10:39.110986 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 07:10:39.111003 kernel: random: crng init done Aug 13 07:10:39.111019 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 13 07:10:39.111035 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 07:10:39.111051 kernel: Fallback order for Node 0: 0 Aug 13 07:10:39.111068 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Aug 13 07:10:39.111085 kernel: Policy zone: Normal Aug 13 07:10:39.111102 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:10:39.111119 kernel: software IO TLB: area num 2. Aug 13 07:10:39.111141 kernel: Memory: 7513396K/7860584K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 346928K reserved, 0K cma-reserved) Aug 13 07:10:39.111158 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 07:10:39.111175 kernel: Kernel/User page tables isolation: enabled Aug 13 07:10:39.111192 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:10:39.111209 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:10:39.111227 kernel: Dynamic Preempt: voluntary Aug 13 07:10:39.111259 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:10:39.111279 kernel: rcu: RCU event tracing is enabled. Aug 13 07:10:39.111315 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 07:10:39.111334 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:10:39.111352 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:10:39.111375 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:10:39.111394 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:10:39.111413 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 07:10:39.111432 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 07:10:39.111451 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 13 07:10:39.111470 kernel: Console: colour dummy device 80x25 Aug 13 07:10:39.111530 kernel: printk: console [ttyS0] enabled Aug 13 07:10:39.111550 kernel: ACPI: Core revision 20230628 Aug 13 07:10:39.111570 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:10:39.111586 kernel: x2apic enabled Aug 13 07:10:39.111606 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:10:39.111626 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Aug 13 07:10:39.111646 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 13 07:10:39.111664 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Aug 13 07:10:39.111689 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Aug 13 07:10:39.111708 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Aug 13 07:10:39.111728 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:10:39.111748 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 13 07:10:39.111767 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 13 07:10:39.111787 kernel: Spectre V2 : Mitigation: IBRS Aug 13 07:10:39.111807 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:10:39.111826 kernel: RETBleed: Mitigation: IBRS Aug 13 07:10:39.111846 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:10:39.111869 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Aug 13 07:10:39.111889 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:10:39.111908 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 07:10:39.111927 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:10:39.111946 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 
07:10:39.111965 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:10:39.111984 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:10:39.112004 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:10:39.112023 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:10:39.112047 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 07:10:39.112067 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:10:39.112086 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:10:39.112102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:10:39.112121 kernel: landlock: Up and running. Aug 13 07:10:39.112139 kernel: SELinux: Initializing. Aug 13 07:10:39.112158 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:10:39.112177 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:10:39.112196 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Aug 13 07:10:39.112219 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:10:39.112245 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:10:39.112265 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:10:39.112284 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Aug 13 07:10:39.112301 kernel: signal: max sigframe size: 1776 Aug 13 07:10:39.112316 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:10:39.112336 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:10:39.112356 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:10:39.112376 kernel: smp: Bringing up secondary CPUs ... 
Aug 13 07:10:39.112400 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:10:39.112420 kernel: .... node #0, CPUs: #1 Aug 13 07:10:39.112441 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Aug 13 07:10:39.112461 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 13 07:10:39.112477 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:10:39.112511 kernel: smpboot: Max logical packages: 1 Aug 13 07:10:39.112532 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Aug 13 07:10:39.112551 kernel: devtmpfs: initialized Aug 13 07:10:39.112576 kernel: x86/mm: Memory block size: 128MB Aug 13 07:10:39.112604 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Aug 13 07:10:39.112624 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:10:39.112642 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 07:10:39.112668 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:10:39.112688 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:10:39.112708 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:10:39.112729 kernel: audit: type=2000 audit(1755069037.721:1): state=initialized audit_enabled=0 res=1 Aug 13 07:10:39.112747 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:10:39.112771 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:10:39.112791 kernel: cpuidle: using governor menu Aug 13 07:10:39.112809 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:10:39.112829 kernel: dca service started, version 1.12.1 Aug 13 07:10:39.112849 kernel: PCI: 
Using configuration type 1 for base access Aug 13 07:10:39.112869 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 07:10:39.112889 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:10:39.112909 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:10:39.112929 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:10:39.112971 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:10:39.112991 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:10:39.113010 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:10:39.113029 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:10:39.113049 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Aug 13 07:10:39.113069 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:10:39.113095 kernel: ACPI: Interpreter enabled Aug 13 07:10:39.113115 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 07:10:39.113135 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:10:39.113164 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:10:39.113185 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 13 07:10:39.113204 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Aug 13 07:10:39.113224 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:10:39.113556 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:10:39.113765 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 07:10:39.113954 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 07:10:39.113985 kernel: PCI host bridge to bus 0000:00 Aug 13 07:10:39.114172 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 
07:10:39.114359 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:10:39.114580 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:10:39.114752 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Aug 13 07:10:39.114927 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:10:39.115165 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:10:39.115415 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Aug 13 07:10:39.115659 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 07:10:39.115870 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 07:10:39.116080 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Aug 13 07:10:39.116290 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 07:10:39.116487 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Aug 13 07:10:39.116719 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:10:39.116913 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 07:10:39.117107 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Aug 13 07:10:39.117318 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:10:39.117537 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Aug 13 07:10:39.117744 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Aug 13 07:10:39.117769 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:10:39.117795 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:10:39.117815 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:10:39.117836 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:10:39.117856 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:10:39.117876 kernel: iommu: Default domain type: Translated
Aug 13 07:10:39.117896 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:10:39.117915 kernel: efivars: Registered efivars operations
Aug 13 07:10:39.117935 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:10:39.117955 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:10:39.117978 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Aug 13 07:10:39.117997 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Aug 13 07:10:39.118023 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Aug 13 07:10:39.118041 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Aug 13 07:10:39.118061 kernel: vgaarb: loaded
Aug 13 07:10:39.118081 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:10:39.118101 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:10:39.118120 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:10:39.118139 kernel: pnp: PnP ACPI init
Aug 13 07:10:39.118164 kernel: pnp: PnP ACPI: found 7 devices
Aug 13 07:10:39.118190 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:10:39.118219 kernel: NET: Registered PF_INET protocol family
Aug 13 07:10:39.118245 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:10:39.118266 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 07:10:39.118286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:10:39.118306 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:10:39.118326 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 13 07:10:39.118346 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 07:10:39.118369 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:10:39.118390 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:10:39.118410 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:10:39.118430 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:10:39.118691 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:10:39.118867 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:10:39.119041 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:10:39.119207 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Aug 13 07:10:39.119469 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:10:39.119550 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:10:39.119571 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 07:10:39.119590 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Aug 13 07:10:39.119610 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:10:39.119629 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 07:10:39.119649 kernel: clocksource: Switched to clocksource tsc
Aug 13 07:10:39.119668 kernel: Initialise system trusted keyrings
Aug 13 07:10:39.119694 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 07:10:39.119713 kernel: Key type asymmetric registered
Aug 13 07:10:39.119733 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:10:39.119751 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:10:39.119771 kernel: io scheduler mq-deadline registered
Aug 13 07:10:39.119790 kernel: io scheduler kyber registered
Aug 13 07:10:39.119809 kernel: io scheduler bfq registered
Aug 13 07:10:39.119828 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:10:39.119849 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 07:10:39.120051 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Aug 13 07:10:39.120075 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 13 07:10:39.120278 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Aug 13 07:10:39.120302 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 07:10:39.120485 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Aug 13 07:10:39.120532 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:10:39.120552 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:10:39.120572 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 07:10:39.120591 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Aug 13 07:10:39.120616 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Aug 13 07:10:39.120820 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Aug 13 07:10:39.120846 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:10:39.120865 kernel: i8042: Warning: Keylock active
Aug 13 07:10:39.120889 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:10:39.120909 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:10:39.121103 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 07:10:39.121302 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 07:10:39.121475 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T07:10:38 UTC (1755069038)
Aug 13 07:10:39.121687 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 07:10:39.121711 kernel: intel_pstate: CPU model not supported
Aug 13 07:10:39.121736 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:10:39.121755 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:10:39.121775 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:10:39.121794 kernel: Segment Routing with IPv6
Aug 13 07:10:39.121813 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:10:39.121838 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:10:39.121857 kernel: Key type dns_resolver registered
Aug 13 07:10:39.121876 kernel: IPI shorthand broadcast: enabled
Aug 13 07:10:39.121895 kernel: sched_clock: Marking stable (892004847, 166930544)->(1140076422, -81141031)
Aug 13 07:10:39.121915 kernel: registered taskstats version 1
Aug 13 07:10:39.121934 kernel: Loading compiled-in X.509 certificates
Aug 13 07:10:39.121953 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:10:39.121972 kernel: Key type .fscrypt registered
Aug 13 07:10:39.121990 kernel: Key type fscrypt-provisioning registered
Aug 13 07:10:39.122014 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:10:39.122034 kernel: ima: No architecture policies found
Aug 13 07:10:39.122053 kernel: clk: Disabling unused clocks
Aug 13 07:10:39.122072 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:10:39.122091 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:10:39.122111 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:10:39.122130 kernel: Run /init as init process
Aug 13 07:10:39.122149 kernel: with arguments:
Aug 13 07:10:39.122172 kernel: /init
Aug 13 07:10:39.122191 kernel: with environment:
Aug 13 07:10:39.122209 kernel: HOME=/
Aug 13 07:10:39.122235 kernel: TERM=linux
Aug 13 07:10:39.122260 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:10:39.122280 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:10:39.122303 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:10:39.122331 systemd[1]: Detected virtualization google.
Aug 13 07:10:39.122357 systemd[1]: Detected architecture x86-64.
Aug 13 07:10:39.122376 systemd[1]: Running in initrd.
Aug 13 07:10:39.122400 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:10:39.122420 systemd[1]: Hostname set to .
Aug 13 07:10:39.122446 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:10:39.122467 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:10:39.122487 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:10:39.122537 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:10:39.122563 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:10:39.122584 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:10:39.122604 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:10:39.122625 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:10:39.122648 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:10:39.122669 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:10:39.122693 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:10:39.122713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:10:39.122734 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:10:39.122774 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:10:39.122799 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:10:39.122820 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:10:39.122840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:10:39.122865 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:10:39.122887 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:10:39.122909 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:10:39.122930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:10:39.122951 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:10:39.122978 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:10:39.122999 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:10:39.123020 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:10:39.123045 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:10:39.123066 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:10:39.123087 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:10:39.123108 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:10:39.123130 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:10:39.123151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:10:39.123208 systemd-journald[183]: Collecting audit messages is disabled.
Aug 13 07:10:39.123264 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:10:39.123285 systemd-journald[183]: Journal started
Aug 13 07:10:39.123331 systemd-journald[183]: Runtime Journal (/run/log/journal/9cfe884d1d234d5d9a618f1ea6075346) is 8.0M, max 148.7M, 140.7M free.
Aug 13 07:10:39.124083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:10:39.129924 systemd-modules-load[184]: Inserted module 'overlay'
Aug 13 07:10:39.134040 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:10:39.140735 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:10:39.153785 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:10:39.157709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:10:39.161983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:10:39.172222 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:10:39.186036 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:10:39.197518 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:10:39.200405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:10:39.201653 kernel: Bridge firewalling registered
Aug 13 07:10:39.204538 systemd-modules-load[184]: Inserted module 'br_netfilter'
Aug 13 07:10:39.204946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:10:39.208567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:10:39.212740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:10:39.232082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:10:39.242577 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:10:39.251046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:10:39.256008 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:10:39.266735 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:10:39.289018 dracut-cmdline[213]: dracut-dracut-053
Aug 13 07:10:39.293826 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:10:39.320910 systemd-resolved[217]: Positive Trust Anchors:
Aug 13 07:10:39.321485 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:10:39.321707 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:10:39.329522 systemd-resolved[217]: Defaulting to hostname 'linux'.
Aug 13 07:10:39.334349 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:10:39.344285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:10:39.403543 kernel: SCSI subsystem initialized
Aug 13 07:10:39.415520 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:10:39.428543 kernel: iscsi: registered transport (tcp)
Aug 13 07:10:39.454538 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:10:39.454621 kernel: QLogic iSCSI HBA Driver
Aug 13 07:10:39.508564 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:10:39.517743 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:10:39.547131 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:10:39.547220 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:10:39.547269 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:10:39.593533 kernel: raid6: avx2x4 gen() 17898 MB/s
Aug 13 07:10:39.610527 kernel: raid6: avx2x2 gen() 17860 MB/s
Aug 13 07:10:39.628142 kernel: raid6: avx2x1 gen() 13939 MB/s
Aug 13 07:10:39.628199 kernel: raid6: using algorithm avx2x4 gen() 17898 MB/s
Aug 13 07:10:39.646105 kernel: raid6: .... xor() 7499 MB/s, rmw enabled
Aug 13 07:10:39.646165 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:10:39.670544 kernel: xor: automatically using best checksumming function avx
Aug 13 07:10:39.846533 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:10:39.860971 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:10:39.868763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:10:39.895546 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Aug 13 07:10:39.902504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:10:39.915488 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:10:39.946865 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Aug 13 07:10:39.986613 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:10:39.998790 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:10:40.092555 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:10:40.109782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:10:40.147157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:10:40.159554 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:10:40.167618 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:10:40.175684 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:10:40.184735 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:10:40.202554 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:10:40.218529 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:10:40.220514 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:10:40.224751 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:10:40.364711 kernel: scsi host0: Virtio SCSI HBA
Aug 13 07:10:40.380589 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Aug 13 07:10:40.401657 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:10:40.401846 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:10:40.407001 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:10:40.408763 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:10:40.409022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:10:40.435446 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Aug 13 07:10:40.435825 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Aug 13 07:10:40.415536 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:10:40.444337 kernel: sd 0:0:1:0: [sda] Write Protect is off
Aug 13 07:10:40.444981 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Aug 13 07:10:40.445506 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 07:10:40.432156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:10:40.451982 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:10:40.452039 kernel: GPT:17805311 != 25165823
Aug 13 07:10:40.452065 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:10:40.452870 kernel: GPT:17805311 != 25165823
Aug 13 07:10:40.454197 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:10:40.454242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:10:40.455985 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Aug 13 07:10:40.472314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:10:40.480673 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:10:40.530578 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (442)
Aug 13 07:10:40.535817 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:10:40.567666 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (459)
Aug 13 07:10:40.574903 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Aug 13 07:10:40.581567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Aug 13 07:10:40.612330 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Aug 13 07:10:40.628656 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Aug 13 07:10:40.659191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Aug 13 07:10:40.680739 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:10:40.726671 disk-uuid[548]: Primary Header is updated.
Aug 13 07:10:40.726671 disk-uuid[548]: Secondary Entries is updated.
Aug 13 07:10:40.726671 disk-uuid[548]: Secondary Header is updated.
Aug 13 07:10:40.757531 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:10:40.757577 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:10:40.786785 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:10:41.801542 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 07:10:41.801628 disk-uuid[549]: The operation has completed successfully.
Aug 13 07:10:41.878354 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:10:41.878521 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:10:41.902744 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:10:41.929209 sh[566]: Success
Aug 13 07:10:41.952817 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:10:42.047983 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:10:42.055532 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:10:42.080823 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:10:42.123634 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:10:42.123730 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:10:42.123756 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:10:42.133064 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:10:42.145721 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:10:42.174528 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 07:10:42.180764 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:10:42.181750 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:10:42.188773 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:10:42.237302 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:42.237363 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:10:42.237390 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:10:42.259842 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:10:42.259940 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:10:42.271805 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:10:42.293883 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:42.298940 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:10:42.325811 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:10:42.472058 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:10:42.484438 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:10:42.541189 ignition[629]: Ignition 2.19.0
Aug 13 07:10:42.541210 ignition[629]: Stage: fetch-offline
Aug 13 07:10:42.543937 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:10:42.541279 ignition[629]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:42.541296 ignition[629]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:42.566191 systemd-networkd[749]: lo: Link UP
Aug 13 07:10:42.541472 ignition[629]: parsed url from cmdline: ""
Aug 13 07:10:42.566196 systemd-networkd[749]: lo: Gained carrier
Aug 13 07:10:42.541479 ignition[629]: no config URL provided
Aug 13 07:10:42.567956 systemd-networkd[749]: Enumeration completed
Aug 13 07:10:42.541504 ignition[629]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:10:42.568625 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:10:42.541519 ignition[629]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:10:42.568633 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:10:42.541531 ignition[629]: failed to fetch config: resource requires networking
Aug 13 07:10:42.570681 systemd-networkd[749]: eth0: Link UP
Aug 13 07:10:42.541811 ignition[629]: Ignition finished successfully
Aug 13 07:10:42.570687 systemd-networkd[749]: eth0: Gained carrier
Aug 13 07:10:42.684644 ignition[758]: Ignition 2.19.0
Aug 13 07:10:42.570698 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:10:42.684654 ignition[758]: Stage: fetch
Aug 13 07:10:42.586961 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:10:42.684869 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:42.592633 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.53/32, gateway 10.128.0.1 acquired from 169.254.169.254
Aug 13 07:10:42.684882 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:42.606979 systemd[1]: Reached target network.target - Network.
Aug 13 07:10:42.685004 ignition[758]: parsed url from cmdline: ""
Aug 13 07:10:42.628744 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 07:10:42.685011 ignition[758]: no config URL provided
Aug 13 07:10:42.695634 unknown[758]: fetched base config from "system"
Aug 13 07:10:42.685020 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:10:42.695648 unknown[758]: fetched base config from "system"
Aug 13 07:10:42.685033 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:10:42.695659 unknown[758]: fetched user config from "gcp"
Aug 13 07:10:42.685055 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Aug 13 07:10:42.698413 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 07:10:42.688026 ignition[758]: GET result: OK
Aug 13 07:10:42.723786 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:10:42.688138 ignition[758]: parsing config with SHA512: fae5da6016ff47694795ffe5d2708eedb6e9c1790083b07ee54eb604a216b7204011e46c337c0c0a3ceb4871a6d3b1ae984db83bd8d6818e5f30277f415f536f
Aug 13 07:10:42.775188 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:10:42.696360 ignition[758]: fetch: fetch complete
Aug 13 07:10:42.785827 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:10:42.696373 ignition[758]: fetch: fetch passed
Aug 13 07:10:42.827020 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:10:42.696433 ignition[758]: Ignition finished successfully
Aug 13 07:10:42.847576 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:10:42.772306 ignition[765]: Ignition 2.19.0
Aug 13 07:10:42.865697 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:10:42.772316 ignition[765]: Stage: kargs
Aug 13 07:10:42.883781 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:10:42.772546 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:42.897734 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:10:42.772565 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:42.897922 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:10:42.773808 ignition[765]: kargs: kargs passed
Aug 13 07:10:42.929749 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:10:42.773874 ignition[765]: Ignition finished successfully
Aug 13 07:10:42.820842 ignition[771]: Ignition 2.19.0
Aug 13 07:10:42.820852 ignition[771]: Stage: disks
Aug 13 07:10:42.821060 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:42.821073 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:42.822225 ignition[771]: disks: disks passed
Aug 13 07:10:42.822290 ignition[771]: Ignition finished successfully
Aug 13 07:10:42.976293 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Aug 13 07:10:43.149711 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:10:43.154659 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:10:43.297548 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:10:43.298384 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:10:43.313426 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:10:43.338649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:10:43.348457 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:10:43.371260 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:10:43.412445 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (788)
Aug 13 07:10:43.412486 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:43.412532 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:10:43.412557 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:10:43.371353 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:10:43.465811 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:10:43.465849 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:10:43.371397 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:10:43.400803 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:10:43.448479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:10:43.479752 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:10:43.667405 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:10:43.677722 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:10:43.687730 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:10:43.697665 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:10:43.846259 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:10:43.870682 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:10:43.884802 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:10:43.909265 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:10:43.927720 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:43.964920 ignition[900]: INFO : Ignition 2.19.0
Aug 13 07:10:43.964920 ignition[900]: INFO : Stage: mount
Aug 13 07:10:43.964920 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:43.964920 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:43.964646 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:10:44.036750 ignition[900]: INFO : mount: mount passed
Aug 13 07:10:44.036750 ignition[900]: INFO : Ignition finished successfully
Aug 13 07:10:43.975147 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:10:43.998683 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:10:44.279879 systemd-networkd[749]: eth0: Gained IPv6LL
Aug 13 07:10:44.304777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:10:44.353566 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (912)
Aug 13 07:10:44.371629 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:10:44.371757 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:10:44.371808 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:10:44.395162 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 07:10:44.395279 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:10:44.398778 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:10:44.441067 ignition[929]: INFO : Ignition 2.19.0
Aug 13 07:10:44.441067 ignition[929]: INFO : Stage: files
Aug 13 07:10:44.456673 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:44.456673 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:44.456673 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:10:44.456673 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:10:44.456673 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:10:44.456673 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:10:44.456673 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:10:44.456673 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:10:44.454059 unknown[929]: wrote ssh authorized keys file for user: core
Aug 13 07:10:44.557773 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 07:10:44.557773 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Aug 13 07:10:44.592709 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:10:44.746201 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:10:44.763706 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Aug 13 07:10:45.263517 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:10:45.603695 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:10:45.603695 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:10:45.643751 ignition[929]: INFO : files: files passed
Aug 13 07:10:45.643751 ignition[929]: INFO : Ignition finished successfully
Aug 13 07:10:45.609712 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:10:45.638790 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:10:45.648735 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:10:45.696301 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:10:45.855735 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:10:45.855735 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:10:45.696424 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:10:45.894719 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:10:45.747238 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:10:45.788149 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:10:45.810743 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:10:45.897740 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:10:45.897905 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:10:45.920649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:10:45.940843 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:10:45.960973 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:10:45.967846 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:10:46.062180 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:10:46.067803 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:10:46.122239 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:10:46.133929 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:10:46.155002 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:10:46.172891 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:10:46.173106 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:10:46.199953 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:10:46.220888 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:10:46.238968 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:10:46.256891 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:10:46.277945 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:10:46.298963 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:10:46.318896 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:10:46.341088 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:10:46.361903 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:10:46.381957 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:10:46.392057 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:10:46.392266 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:10:46.423065 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:10:46.433079 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:10:46.451039 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:10:46.451202 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:10:46.471011 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:10:46.471207 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:10:46.511024 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:10:46.511249 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:10:46.528097 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:10:46.528286 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:10:46.554948 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:10:46.576966 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:10:46.612694 ignition[982]: INFO : Ignition 2.19.0
Aug 13 07:10:46.612694 ignition[982]: INFO : Stage: umount
Aug 13 07:10:46.612694 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:10:46.612694 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 07:10:46.612694 ignition[982]: INFO : umount: umount passed
Aug 13 07:10:46.612694 ignition[982]: INFO : Ignition finished successfully
Aug 13 07:10:46.577240 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:10:46.631187 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:10:46.637890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:10:46.638144 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:10:46.708057 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:10:46.708279 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:10:46.742834 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:10:46.743973 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:10:46.744104 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:10:46.759433 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:10:46.759606 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:10:46.781333 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:10:46.781478 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:10:46.790227 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:10:46.790300 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:10:46.816053 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:10:46.816144 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:10:46.835051 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:10:46.835144 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:10:46.860008 systemd[1]: Stopped target network.target - Network.
Aug 13 07:10:46.868952 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:10:46.869056 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:10:46.897935 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:10:46.905931 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:10:46.909618 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:10:46.920973 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:10:46.946853 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:10:46.965929 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:10:46.966017 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:10:46.983960 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:10:46.984050 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:10:46.993044 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:10:46.993145 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:10:47.018953 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:10:47.019054 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:10:47.038946 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:10:47.039049 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:10:47.066242 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:10:47.072613 systemd-networkd[749]: eth0: DHCPv6 lease lost
Aug 13 07:10:47.086004 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:10:47.109371 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:10:47.109576 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:10:47.129398 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:10:47.129681 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:10:47.137889 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:10:47.137952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:10:47.159688 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:10:47.187684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:10:47.187913 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:10:47.197986 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:10:47.198070 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:10:47.225935 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:10:47.226036 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:10:47.243948 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:10:47.244059 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:10:47.265126 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:10:47.298377 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:10:47.298629 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:10:47.328158 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:10:47.328317 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:10:47.344873 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:10:47.344964 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:10:47.364227 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:10:47.364349 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:10:47.402039 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:10:47.728747 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:10:47.402286 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:10:47.449772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:10:47.450038 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:10:47.499825 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:10:47.530734 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:10:47.531011 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:10:47.552959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:10:47.553058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:10:47.576619 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:10:47.576774 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:10:47.597329 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:10:47.597469 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:10:47.618413 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:10:47.644879 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:10:47.679823 systemd[1]: Switching root.
Aug 13 07:10:47.878659 systemd-journald[183]: Journal stopped
Aug 13 07:10:50.585241 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:10:50.585298 kernel: SELinux: policy capability open_perms=1
Aug 13 07:10:50.585324 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:10:50.585342 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:10:50.585364 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:10:50.585383 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:10:50.585404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:10:50.585426 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:10:50.585444 kernel: audit: type=1403 audit(1755069048.387:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:10:50.585467 systemd[1]: Successfully loaded SELinux policy in 97.485ms.
Aug 13 07:10:50.585503 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.256ms.
Aug 13 07:10:50.585526 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:10:50.585547 systemd[1]: Detected virtualization google.
Aug 13 07:10:50.585567 systemd[1]: Detected architecture x86-64.
Aug 13 07:10:50.585594 systemd[1]: Detected first boot.
Aug 13 07:10:50.585616 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:10:50.585637 zram_generator::config[1023]: No configuration found.
Aug 13 07:10:50.585661 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:10:50.585682 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:10:50.585710 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:10:50.585733 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:10:50.585756 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:10:50.585778 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:10:50.585799 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:10:50.585821 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:10:50.585843 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:10:50.585870 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:10:50.585892 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:10:50.585913 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:10:50.585935 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:10:50.585957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:10:50.585979 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:10:50.586000 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:10:50.586022 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:10:50.586048 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:10:50.586069 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:10:50.586090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:10:50.586112 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:10:50.586134 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:10:50.586154 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:10:50.586191 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:10:50.586214 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:10:50.586237 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:10:50.586263 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:10:50.586286 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:10:50.586309 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:10:50.586331 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:10:50.586353 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:10:50.586376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:10:50.586398 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:10:50.586426 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:10:50.586448 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:10:50.586471 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:10:50.586506 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:10:50.586530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:10:50.586558 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:10:50.586580 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:10:50.586603 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:10:50.586627 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:10:50.586650 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:10:50.586672 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:10:50.586697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:10:50.586721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:10:50.586747 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:10:50.586770 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:10:50.586793 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:10:50.586815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:10:50.586838 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:10:50.586860 kernel: fuse: init (API version 7.39)
Aug 13 07:10:50.586881 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:10:50.586905 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:10:50.586932 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:10:50.586954 kernel: ACPI: bus type drm_connector registered
Aug 13 07:10:50.586974 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:10:50.586996 kernel: loop: module loaded
Aug 13 07:10:50.587018 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:10:50.587040 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:10:50.587063 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:10:50.587086 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:10:50.587108 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:10:50.587169 systemd-journald[1110]: Collecting audit messages is disabled.
Aug 13 07:10:50.587222 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:10:50.587248 systemd-journald[1110]: Journal started
Aug 13 07:10:50.587296 systemd-journald[1110]: Runtime Journal (/run/log/journal/88eb5eeac7e64caca0cbb367e715dfb3) is 8.0M, max 148.7M, 140.7M free.
Aug 13 07:10:49.342442 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:10:49.366833 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 07:10:49.367449 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:10:50.626536 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:10:50.644515 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:10:50.644609 systemd[1]: Stopped verity-setup.service.
Aug 13 07:10:50.673528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:10:50.684535 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:10:50.695180 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:10:50.704955 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:10:50.715004 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:10:50.725998 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:10:50.735962 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:10:50.746063 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:10:50.756129 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:10:50.768099 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:10:50.780139 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:10:50.780415 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:10:50.792116 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:10:50.792364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:10:50.804142 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:10:50.804397 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:10:50.815113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:10:50.815362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:10:50.827103 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:10:50.827369 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:10:50.838096 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:10:50.838345 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:10:50.849281 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:10:50.860108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:10:50.872119 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:10:50.884106 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:10:50.909702 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:10:50.932751 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:10:50.947682 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:10:50.957729 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:10:50.957805 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:10:50.969698 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:10:50.992838 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:10:51.009790 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:10:51.019877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:10:51.029045 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:10:51.045380 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:10:51.056726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:10:51.068672 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:10:51.081677 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:10:51.090229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:10:51.110747 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:10:51.117670 systemd-journald[1110]: Time spent on flushing to /var/log/journal/88eb5eeac7e64caca0cbb367e715dfb3 is 123.576ms for 927 entries.
Aug 13 07:10:51.117670 systemd-journald[1110]: System Journal (/var/log/journal/88eb5eeac7e64caca0cbb367e715dfb3) is 8.0M, max 584.8M, 576.8M free.
Aug 13 07:10:51.287387 systemd-journald[1110]: Received client request to flush runtime journal.
Aug 13 07:10:51.287457 kernel: loop0: detected capacity change from 0 to 140768
Aug 13 07:10:51.287486 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:10:51.137159 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:10:51.151924 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:10:51.173116 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:10:51.184874 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:10:51.198090 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:10:51.210456 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:10:51.230361 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:10:51.260167 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:10:51.278182 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:10:51.290136 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:10:51.322579 kernel: loop1: detected capacity change from 0 to 229808
Aug 13 07:10:51.318761 udevadm[1143]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 07:10:51.345322 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:10:51.346848 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:10:51.357803 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:10:51.380621 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:10:51.439531 kernel: loop2: detected capacity change from 0 to 142488
Aug 13 07:10:51.458797 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Aug 13 07:10:51.459596 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Aug 13 07:10:51.474256 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:10:51.570602 kernel: loop3: detected capacity change from 0 to 54824
Aug 13 07:10:51.647224 kernel: loop4: detected capacity change from 0 to 140768
Aug 13 07:10:51.702844 kernel: loop5: detected capacity change from 0 to 229808
Aug 13 07:10:51.751532 kernel: loop6: detected capacity change from 0 to 142488
Aug 13 07:10:51.821530 kernel: loop7: detected capacity change from 0 to 54824
Aug 13 07:10:51.850532 (sd-merge)[1166]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Aug 13 07:10:51.851967 (sd-merge)[1166]: Merged extensions into '/usr'.
Aug 13 07:10:51.865789 systemd[1]: Reloading requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:10:51.866228 systemd[1]: Reloading...
Aug 13 07:10:52.046845 zram_generator::config[1189]: No configuration found.
Aug 13 07:10:52.306936 ldconfig[1136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:10:52.319689 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:10:52.423269 systemd[1]: Reloading finished in 555 ms.
Aug 13 07:10:52.460992 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:10:52.471356 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:10:52.496817 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:10:52.513471 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:10:52.531694 systemd[1]: Reloading requested from client PID 1232 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:10:52.531716 systemd[1]: Reloading...
Aug 13 07:10:52.562979 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:10:52.564347 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:10:52.566470 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:10:52.567240 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Aug 13 07:10:52.567560 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Aug 13 07:10:52.574237 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:10:52.574430 systemd-tmpfiles[1233]: Skipping /boot
Aug 13 07:10:52.598261 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:10:52.599598 systemd-tmpfiles[1233]: Skipping /boot
Aug 13 07:10:52.662602 zram_generator::config[1256]: No configuration found.
Aug 13 07:10:52.807579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:10:52.873466 systemd[1]: Reloading finished in 341 ms.
Aug 13 07:10:52.895285 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:10:52.912350 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:10:52.937815 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:10:52.954724 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:10:52.969787 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:10:52.988832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:10:53.007874 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:10:53.029834 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:10:53.055679 augenrules[1322]: No rules
Aug 13 07:10:53.057104 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:10:53.067842 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:10:53.093558 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:10:53.104812 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Aug 13 07:10:53.115169 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:10:53.131389 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:10:53.131784 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:10:53.140698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:10:53.160722 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:10:53.178672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:10:53.188855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:10:53.196960 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:10:53.207667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:10:53.209401 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:10:53.220466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:10:53.232813 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:10:53.245554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:10:53.247057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:10:53.260588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:10:53.260882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:10:53.273178 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:10:53.274595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:10:53.285448 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:10:53.318382 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:10:53.325426 systemd-resolved[1311]: Positive Trust Anchors:
Aug 13 07:10:53.325942 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:10:53.326016 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:10:53.339036 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:10:53.339357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:10:53.350441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:10:53.351140 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Aug 13 07:10:53.372746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:10:53.393763 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:10:53.413885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:10:53.431752 systemd[1]: Starting setup-oem.service - Setup OEM...
Aug 13 07:10:53.442803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:10:53.450731 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:10:53.460715 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:10:53.470675 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:10:53.470732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:10:53.471384 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:10:53.482442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:10:53.483885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:10:53.496186 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:10:53.496431 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:10:53.507184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:10:53.508146 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:10:53.520199 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:10:53.521542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:10:53.536169 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1344)
Aug 13 07:10:53.567132 systemd[1]: Finished setup-oem.service - Setup OEM.
Aug 13 07:10:53.584619 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:10:53.622392 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:10:53.625543 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 07:10:53.642520 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Aug 13 07:10:53.658520 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:10:53.660813 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Aug 13 07:10:53.670532 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Aug 13 07:10:53.678705 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:10:53.679034 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:10:53.688946 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Aug 13 07:10:53.716802 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:10:53.731520 kernel: ACPI: button: Sleep Button [SLPF]
Aug 13 07:10:53.780524 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Aug 13 07:10:53.780783 systemd-networkd[1375]: lo: Link UP
Aug 13 07:10:53.781251 systemd-networkd[1375]: lo: Gained carrier
Aug 13 07:10:53.792026 systemd-networkd[1375]: Enumeration completed
Aug 13 07:10:53.792399 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:10:53.792706 systemd[1]: Reached target network.target - Network.
Aug 13 07:10:53.802750 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:10:53.804049 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:10:53.804057 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:10:53.811166 systemd-networkd[1375]: eth0: Link UP
Aug 13 07:10:53.811180 systemd-networkd[1375]: eth0: Gained carrier
Aug 13 07:10:53.811211 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:10:53.825402 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:10:53.825250 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:10:53.828649 systemd-networkd[1375]: eth0: DHCPv4 address 10.128.0.53/32, gateway 10.128.0.1 acquired from 169.254.169.254
Aug 13 07:10:53.840588 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Aug 13 07:10:53.892861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:10:53.901671 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:10:53.933156 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:10:53.953407 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:10:53.976521 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:10:54.019450 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:10:54.021581 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:10:54.026159 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:10:54.046398 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:10:54.051176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:10:54.064245 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:10:54.074868 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:10:54.086807 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:10:54.098972 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:10:54.108924 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:10:54.120724 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:10:54.132711 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:10:54.132775 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:10:54.141698 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:10:54.152939 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:10:54.164573 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:10:54.179143 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:10:54.189819 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:10:54.201993 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:10:54.212661 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:10:54.222680 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:10:54.231773 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:10:54.231845 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:10:54.238667 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:10:54.261584 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 07:10:54.281840 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:10:54.314440 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:10:54.330811 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:10:54.340709 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:10:54.351796 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:10:54.353548 jq[1423]: false
Aug 13 07:10:54.366758 systemd[1]: Started ntpd.service - Network Time Service.
Aug 13 07:10:54.379653 coreos-metadata[1421]: Aug 13 07:10:54.379 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Aug 13 07:10:54.385656 coreos-metadata[1421]: Aug 13 07:10:54.381 INFO Fetch successful
Aug 13 07:10:54.385656 coreos-metadata[1421]: Aug 13 07:10:54.381 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Aug 13 07:10:54.385656 coreos-metadata[1421]: Aug 13 07:10:54.382 INFO Fetch successful
Aug 13 07:10:54.385656 coreos-metadata[1421]: Aug 13 07:10:54.382 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Aug 13 07:10:54.385656 coreos-metadata[1421]: Aug 13 07:10:54.382 INFO Fetch successful
Aug 13 07:10:54.385656 coreos-metadata[1421]: Aug 13 07:10:54.382 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Aug 13 07:10:54.385656 coreos-metadata[1421]: Aug 13 07:10:54.383 INFO Fetch successful
Aug 13 07:10:54.386688 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:10:54.402794 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:10:54.418790 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:10:54.440781 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:10:54.445379 dbus-daemon[1422]: [system] SELinux support is enabled
Aug 13 07:10:54.451280 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Aug 13 07:10:54.452754 dbus-daemon[1422]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1375 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 07:10:54.453731 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:10:54.457645 extend-filesystems[1426]: Found loop4
Aug 13 07:10:54.457645 extend-filesystems[1426]: Found loop5
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found loop6
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found loop7
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found sda
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found sda1
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found sda2
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found sda3
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found usr
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found sda4
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found sda6
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found sda7
Aug 13 07:10:54.485809 extend-filesystems[1426]: Found sda9
Aug 13 07:10:54.485809 extend-filesystems[1426]: Checking size of /dev/sda9
Aug 13 07:10:54.638965 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Aug 13 07:10:54.649667 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Aug 13 07:10:54.458776 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:10 UTC 2025 (1): Starting
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: ----------------------------------------------------
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: ntp-4 is maintained by Network Time Foundation,
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: corporation. Support and training for ntp-4 are
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: available at https://www.nwtime.org/support
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: ----------------------------------------------------
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: proto: precision = 0.104 usec (-23)
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: basedate set to 2025-07-31
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: gps base set to 2025-08-03 (week 2378)
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: Listen normally on 3 eth0 10.128.0.53:123
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: Listen normally on 4 lo [::1]:123
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:35%2#123 flags 0x11 failed: Cannot assign requested address
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:35%2#123
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:35%2
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: Listening on routing socket on fd #21 for interface updates
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 07:10:54.649890 ntpd[1428]: 13 Aug 07:10:54 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 07:10:54.464726 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:10 UTC 2025 (1): Starting
Aug 13 07:10:54.655989 extend-filesystems[1426]: Resized partition /dev/sda9
Aug 13 07:10:54.675697 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1364)
Aug 13 07:10:54.472725 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:10:54.464760 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 13 07:10:54.676253 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024)
Aug 13 07:10:54.499034 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:10:54.685020 update_engine[1442]: I20250813 07:10:54.642174 1442 main.cc:92] Flatcar Update Engine starting
Aug 13 07:10:54.685020 update_engine[1442]: I20250813 07:10:54.646168 1442 update_check_scheduler.cc:74] Next update check in 3m44s
Aug 13 07:10:54.464776 ntpd[1428]: ----------------------------------------------------
Aug 13 07:10:54.685723 extend-filesystems[1452]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 07:10:54.685723 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 2
Aug 13 07:10:54.685723 extend-filesystems[1452]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Aug 13 07:10:54.542138 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:10:54.751062 jq[1445]: true
Aug 13 07:10:54.464791 ntpd[1428]: ntp-4 is maintained by Network Time Foundation,
Aug 13 07:10:54.751568 extend-filesystems[1426]: Resized filesystem in /dev/sda9
Aug 13 07:10:54.542929 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:10:54.464806 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 13 07:10:54.543441 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:10:54.762371 tar[1456]: linux-amd64/LICENSE
Aug 13 07:10:54.762371 tar[1456]: linux-amd64/helm
Aug 13 07:10:54.464820 ntpd[1428]: corporation. Support and training for ntp-4 are
Aug 13 07:10:54.544448 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:10:54.464834 ntpd[1428]: available at https://www.nwtime.org/support
Aug 13 07:10:54.561080 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:10:54.464849 ntpd[1428]: ----------------------------------------------------
Aug 13 07:10:54.561364 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:10:54.468984 ntpd[1428]: proto: precision = 0.104 usec (-23)
Aug 13 07:10:54.598061 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:10:54.469411 ntpd[1428]: basedate set to 2025-07-31
Aug 13 07:10:54.598132 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:10:54.469431 ntpd[1428]: gps base set to 2025-08-03 (week 2378)
Aug 13 07:10:54.631753 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:10:54.483153 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123
Aug 13 07:10:54.631812 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:10:54.483231 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 13 07:10:54.678996 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:10:54.485965 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123
Aug 13 07:10:54.695461 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:10:54.486607 ntpd[1428]: Listen normally on 3 eth0 10.128.0.53:123
Aug 13 07:10:54.696922 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:10:54.486697 ntpd[1428]: Listen normally on 4 lo [::1]:123
Aug 13 07:10:54.719450 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 07:10:54.486780 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:35%2#123 flags 0x11 failed: Cannot assign requested address
Aug 13 07:10:54.719480 systemd-logind[1439]: Watching system buttons on /dev/input/event2 (Sleep Button)
Aug 13 07:10:54.486812 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:35%2#123
Aug 13 07:10:54.719809 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:10:54.486836 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:35%2
Aug 13 07:10:54.721294 systemd-logind[1439]: New seat seat0.
Aug 13 07:10:54.486891 ntpd[1428]: Listening on routing socket on fd #21 for interface updates
Aug 13 07:10:54.727054 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:10:54.494710 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 07:10:54.759145 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:10:54.494752 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 07:10:54.767454 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 07:10:54.598328 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 07:10:54.789651 jq[1465]: true
Aug 13 07:10:54.787995 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:10:54.812325 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 07:10:54.891723 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 07:10:54.902694 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 07:10:55.049801 bash[1492]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:10:55.049611 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:10:55.077901 systemd[1]: Starting sshkeys.service...
Aug 13 07:10:55.136012 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 07:10:55.155714 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 07:10:55.257619 coreos-metadata[1497]: Aug 13 07:10:55.257 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Aug 13 07:10:55.262224 coreos-metadata[1497]: Aug 13 07:10:55.258 INFO Fetch failed with 404: resource not found
Aug 13 07:10:55.262224 coreos-metadata[1497]: Aug 13 07:10:55.258 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Aug 13 07:10:55.262224 coreos-metadata[1497]: Aug 13 07:10:55.259 INFO Fetch successful
Aug 13 07:10:55.263840 coreos-metadata[1497]: Aug 13 07:10:55.263 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Aug 13 07:10:55.263840 coreos-metadata[1497]: Aug 13 07:10:55.263 INFO Fetch failed with 404: resource not found
Aug 13 07:10:55.263840 coreos-metadata[1497]: Aug 13 07:10:55.263 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Aug 13 07:10:55.264266 coreos-metadata[1497]: Aug 13 07:10:55.264 INFO Fetch failed with 404: resource not found
Aug 13 07:10:55.264266 coreos-metadata[1497]: Aug 13 07:10:55.264 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Aug 13 07:10:55.264942 coreos-metadata[1497]: Aug 13 07:10:55.264 INFO Fetch successful
Aug 13 07:10:55.272984 unknown[1497]: wrote ssh authorized keys file for user: core
Aug 13 07:10:55.301289 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 07:10:55.301545 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 07:10:55.303471 dbus-daemon[1422]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1469 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 07:10:55.327597 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 07:10:55.338433 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 07:10:55.366215 update-ssh-keys[1507]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:10:55.367181 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 07:10:55.386589 systemd[1]: Finished sshkeys.service.
Aug 13 07:10:55.452218 polkitd[1508]: Started polkitd version 121
Aug 13 07:10:55.465306 ntpd[1428]: bind(24) AF_INET6 fe80::4001:aff:fe80:35%2#123 flags 0x11 failed: Cannot assign requested address
Aug 13 07:10:55.465878 ntpd[1428]: 13 Aug 07:10:55 ntpd[1428]: bind(24) AF_INET6 fe80::4001:aff:fe80:35%2#123 flags 0x11 failed: Cannot assign requested address
Aug 13 07:10:55.465878 ntpd[1428]: 13 Aug 07:10:55 ntpd[1428]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:35%2#123
Aug 13 07:10:55.465878 ntpd[1428]: 13 Aug 07:10:55 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:35%2
Aug 13 07:10:55.465358 ntpd[1428]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:35%2#123
Aug 13 07:10:55.465380 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:35%2
Aug 13 07:10:55.473395 polkitd[1508]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 07:10:55.481113 polkitd[1508]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 07:10:55.485548 polkitd[1508]: Finished loading, compiling and executing 2 rules
Aug 13 07:10:55.486470 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 07:10:55.486739 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 07:10:55.488530 polkitd[1508]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 07:10:55.553239 systemd-hostnamed[1469]: Hostname set to (transient)
Aug 13 07:10:55.555082 systemd-resolved[1311]: System hostname changed to 'ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal'.
Aug 13 07:10:55.672309 systemd-networkd[1375]: eth0: Gained IPv6LL
Aug 13 07:10:55.680983 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:10:55.683132 containerd[1466]: time="2025-08-13T07:10:55.681755008Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 07:10:55.693284 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:10:55.701820 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:10:55.711864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:10:55.729934 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:10:55.745945 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Aug 13 07:10:55.790139 init.sh[1529]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Aug 13 07:10:55.790804 init.sh[1529]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Aug 13 07:10:55.792259 init.sh[1529]: + /usr/bin/google_instance_setup
Aug 13 07:10:55.796639 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 07:10:55.811988 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 07:10:55.830964 systemd[1]: Started sshd@0-10.128.0.53:22-139.178.68.195:53720.service - OpenSSH per-connection server daemon (139.178.68.195:53720).
Aug 13 07:10:55.843765 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:10:55.846360 containerd[1466]: time="2025-08-13T07:10:55.846258768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:10:55.853650 containerd[1466]: time="2025-08-13T07:10:55.853588692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:10:55.855820 containerd[1466]: time="2025-08-13T07:10:55.853812832Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 07:10:55.856025 containerd[1466]: time="2025-08-13T07:10:55.855992069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 07:10:55.856335 containerd[1466]: time="2025-08-13T07:10:55.856308587Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 07:10:55.856439 containerd[1466]: time="2025-08-13T07:10:55.856420140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 07:10:55.856695 containerd[1466]: time="2025-08-13T07:10:55.856665946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:10:55.856808 containerd[1466]: time="2025-08-13T07:10:55.856787773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:10:55.857195 containerd[1466]: time="2025-08-13T07:10:55.857157862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:10:55.857327 containerd[1466]: time="2025-08-13T07:10:55.857304869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 07:10:55.857426 containerd[1466]: time="2025-08-13T07:10:55.857401304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:10:55.857546 containerd[1466]: time="2025-08-13T07:10:55.857523958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 07:10:55.857755 containerd[1466]: time="2025-08-13T07:10:55.857732771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:10:55.858178 containerd[1466]: time="2025-08-13T07:10:55.858153395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:10:55.858595 containerd[1466]: time="2025-08-13T07:10:55.858561977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:10:55.858750 containerd[1466]: time="2025-08-13T07:10:55.858725075Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 07:10:55.859042 containerd[1466]: time="2025-08-13T07:10:55.859018169Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 07:10:55.859244 containerd[1466]: time="2025-08-13T07:10:55.859222390Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 07:10:55.875168 containerd[1466]: time="2025-08-13T07:10:55.873102111Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 07:10:55.875168 containerd[1466]: time="2025-08-13T07:10:55.873281738Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 07:10:55.875168 containerd[1466]: time="2025-08-13T07:10:55.873320973Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 07:10:55.875168 containerd[1466]: time="2025-08-13T07:10:55.873349313Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 07:10:55.875168 containerd[1466]: time="2025-08-13T07:10:55.873375073Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 07:10:55.875168 containerd[1466]: time="2025-08-13T07:10:55.873623183Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 07:10:55.881647 containerd[1466]: time="2025-08-13T07:10:55.881356404Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 07:10:55.884630 containerd[1466]: time="2025-08-13T07:10:55.884574614Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 07:10:55.884743 containerd[1466]: time="2025-08-13T07:10:55.884645813Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 07:10:55.884743 containerd[1466]: time="2025-08-13T07:10:55.884678079Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 07:10:55.884743 containerd[1466]: time="2025-08-13T07:10:55.884710596Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 07:10:55.884881 containerd[1466]: time="2025-08-13T07:10:55.884736057Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 07:10:55.884881 containerd[1466]: time="2025-08-13T07:10:55.884764566Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 07:10:55.884881 containerd[1466]: time="2025-08-13T07:10:55.884817561Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 07:10:55.884881 containerd[1466]: time="2025-08-13T07:10:55.884851571Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 07:10:55.885066 containerd[1466]: time="2025-08-13T07:10:55.884884294Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 07:10:55.885066 containerd[1466]: time="2025-08-13T07:10:55.884915146Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 07:10:55.885066 containerd[1466]: time="2025-08-13T07:10:55.884944149Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 07:10:55.885066 containerd[1466]: time="2025-08-13T07:10:55.884985345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885066 containerd[1466]: time="2025-08-13T07:10:55.885026306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885066 containerd[1466]: time="2025-08-13T07:10:55.885051782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885332 containerd[1466]: time="2025-08-13T07:10:55.885095216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885332 containerd[1466]: time="2025-08-13T07:10:55.885127655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885332 containerd[1466]: time="2025-08-13T07:10:55.885157878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885332 containerd[1466]: time="2025-08-13T07:10:55.885187437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885332 containerd[1466]: time="2025-08-13T07:10:55.885213806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885332 containerd[1466]: time="2025-08-13T07:10:55.885244581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885332 containerd[1466]: time="2025-08-13T07:10:55.885277670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885332 containerd[1466]: time="2025-08-13T07:10:55.885307121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885700 containerd[1466]: time="2025-08-13T07:10:55.885337398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885700 containerd[1466]: time="2025-08-13T07:10:55.885367245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885700 containerd[1466]: time="2025-08-13T07:10:55.885401344Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 07:10:55.885700 containerd[1466]: time="2025-08-13T07:10:55.885448864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885700 containerd[1466]: time="2025-08-13T07:10:55.885480175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.885700 containerd[1466]: time="2025-08-13T07:10:55.885529836Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 07:10:55.885700 containerd[1466]: time="2025-08-13T07:10:55.885604582Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 07:10:55.885700 containerd[1466]: time="2025-08-13T07:10:55.885644623Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 07:10:55.885700 containerd[1466]: time="2025-08-13T07:10:55.885673277Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 07:10:55.886086 containerd[1466]: time="2025-08-13T07:10:55.885702662Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 07:10:55.886086 containerd[1466]: time="2025-08-13T07:10:55.885722966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.886086 containerd[1466]: time="2025-08-13T07:10:55.885751656Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 07:10:55.886086 containerd[1466]: time="2025-08-13T07:10:55.885777306Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 07:10:55.886086 containerd[1466]: time="2025-08-13T07:10:55.885796945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 07:10:55.891553 containerd[1466]: time="2025-08-13T07:10:55.886326506Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 07:10:55.891553 containerd[1466]: time="2025-08-13T07:10:55.886446401Z" level=info msg="Connect containerd service"
Aug 13 07:10:55.891553 containerd[1466]: time="2025-08-13T07:10:55.887998524Z" level=info msg="using legacy CRI server"
Aug 13 07:10:55.891553 containerd[1466]: time="2025-08-13T07:10:55.888037078Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 07:10:55.891553 containerd[1466]: time="2025-08-13T07:10:55.888227417Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 07:10:55.891553 containerd[1466]: time="2025-08-13T07:10:55.890735307Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:10:55.897201 containerd[1466]: time="2025-08-13T07:10:55.893560338Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 07:10:55.897201 containerd[1466]: time="2025-08-13T07:10:55.893645130Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 07:10:55.897201 containerd[1466]: time="2025-08-13T07:10:55.893699446Z" level=info msg="Start subscribing containerd event"
Aug 13 07:10:55.897201 containerd[1466]: time="2025-08-13T07:10:55.893755450Z" level=info msg="Start recovering state"
Aug 13 07:10:55.897201 containerd[1466]: time="2025-08-13T07:10:55.893853144Z" level=info msg="Start event monitor"
Aug 13 07:10:55.897201 containerd[1466]: time="2025-08-13T07:10:55.893882164Z" level=info msg="Start snapshots syncer"
Aug 13 07:10:55.897201 containerd[1466]: time="2025-08-13T07:10:55.893896240Z" level=info msg="Start cni network conf syncer for default"
Aug 13 07:10:55.897201 containerd[1466]: time="2025-08-13T07:10:55.893910123Z" level=info msg="Start streaming server"
Aug 13 07:10:55.897201 containerd[1466]: time="2025-08-13T07:10:55.895222636Z" level=info msg="containerd successfully booted in 0.224298s"
Aug 13 07:10:55.894141 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 07:10:55.904654 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 07:10:55.904956 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 07:10:55.927096 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 07:10:55.978151 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 07:10:56.001887 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 07:10:56.021240 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 07:10:56.032031 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 07:10:56.320132 sshd[1542]: Accepted publickey for core from 139.178.68.195 port 53720 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:10:56.323423 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:56.341720 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 07:10:56.360508 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 07:10:56.382315 systemd-logind[1439]: New session 1 of user core.
Aug 13 07:10:56.411234 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 07:10:56.437020 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 07:10:56.481761 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 07:10:56.494686 tar[1456]: linux-amd64/README.md
Aug 13 07:10:56.526258 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 07:10:56.737311 systemd[1556]: Queued start job for default target default.target.
Aug 13 07:10:56.744441 systemd[1556]: Created slice app.slice - User Application Slice.
Aug 13 07:10:56.744986 systemd[1556]: Reached target paths.target - Paths.
Aug 13 07:10:56.745018 systemd[1556]: Reached target timers.target - Timers.
Aug 13 07:10:56.750038 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 07:10:56.785755 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 07:10:56.785980 systemd[1556]: Reached target sockets.target - Sockets.
Aug 13 07:10:56.786007 systemd[1556]: Reached target basic.target - Basic System.
Aug 13 07:10:56.786187 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 07:10:56.786520 systemd[1556]: Reached target default.target - Main User Target.
Aug 13 07:10:56.786601 systemd[1556]: Startup finished in 282ms.
Aug 13 07:10:56.804775 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 07:10:56.816776 instance-setup[1538]: INFO Running google_set_multiqueue.
Aug 13 07:10:56.840401 instance-setup[1538]: INFO Set channels for eth0 to 2.
Aug 13 07:10:56.845259 instance-setup[1538]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1.
Aug 13 07:10:56.847800 instance-setup[1538]: INFO /proc/irq/27/smp_affinity_list: real affinity 0
Aug 13 07:10:56.848033 instance-setup[1538]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1.
Aug 13 07:10:56.850537 instance-setup[1538]: INFO /proc/irq/28/smp_affinity_list: real affinity 0
Aug 13 07:10:56.850801 instance-setup[1538]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1.
Aug 13 07:10:56.852656 instance-setup[1538]: INFO /proc/irq/29/smp_affinity_list: real affinity 1
Aug 13 07:10:56.852715 instance-setup[1538]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1.
Aug 13 07:10:56.854538 instance-setup[1538]: INFO /proc/irq/30/smp_affinity_list: real affinity 1
Aug 13 07:10:56.864567 instance-setup[1538]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Aug 13 07:10:56.868667 instance-setup[1538]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Aug 13 07:10:56.871032 instance-setup[1538]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Aug 13 07:10:56.871093 instance-setup[1538]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Aug 13 07:10:56.895923 init.sh[1529]: + /usr/bin/google_metadata_script_runner --script-type startup
Aug 13 07:10:57.064997 systemd[1]: Started sshd@1-10.128.0.53:22-139.178.68.195:53728.service - OpenSSH per-connection server daemon (139.178.68.195:53728).
Aug 13 07:10:57.165891 startup-script[1598]: INFO Starting startup scripts.
Aug 13 07:10:57.172204 startup-script[1598]: INFO No startup scripts found in metadata.
Aug 13 07:10:57.172283 startup-script[1598]: INFO Finished running startup scripts.
Aug 13 07:10:57.198590 init.sh[1529]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Aug 13 07:10:57.198590 init.sh[1529]: + daemon_pids=()
Aug 13 07:10:57.200623 init.sh[1529]: + for d in accounts clock_skew network
Aug 13 07:10:57.200623 init.sh[1529]: + daemon_pids+=($!)
Aug 13 07:10:57.200623 init.sh[1529]: + for d in accounts clock_skew network
Aug 13 07:10:57.200623 init.sh[1529]: + daemon_pids+=($!)
Aug 13 07:10:57.200623 init.sh[1529]: + for d in accounts clock_skew network
Aug 13 07:10:57.200959 init.sh[1605]: + /usr/bin/google_accounts_daemon
Aug 13 07:10:57.201378 init.sh[1529]: + daemon_pids+=($!)
Aug 13 07:10:57.201378 init.sh[1529]: + NOTIFY_SOCKET=/run/systemd/notify
Aug 13 07:10:57.201378 init.sh[1529]: + /usr/bin/systemd-notify --ready
Aug 13 07:10:57.203387 init.sh[1606]: + /usr/bin/google_clock_skew_daemon
Aug 13 07:10:57.203835 init.sh[1607]: + /usr/bin/google_network_daemon
Aug 13 07:10:57.228992 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Aug 13 07:10:57.241412 init.sh[1529]: + wait -n 1605 1606 1607
Aug 13 07:10:57.429219 sshd[1603]: Accepted publickey for core from 139.178.68.195 port 53728 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:10:57.432471 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:57.449629 systemd-logind[1439]: New session 2 of user core.
Aug 13 07:10:57.451787 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 07:10:57.643043 google-clock-skew[1606]: INFO Starting Google Clock Skew daemon.
Aug 13 07:10:57.651263 google-clock-skew[1606]: INFO Clock drift token has changed: 0.
Aug 13 07:10:57.660729 google-networking[1607]: INFO Starting Google Networking daemon.
Aug 13 07:10:57.668186 sshd[1603]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:57.676301 systemd[1]: sshd@1-10.128.0.53:22-139.178.68.195:53728.service: Deactivated successfully.
Aug 13 07:10:57.683809 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 07:10:57.688737 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit.
Aug 13 07:10:57.691936 systemd-logind[1439]: Removed session 2.
Aug 13 07:10:57.730249 systemd[1]: Started sshd@2-10.128.0.53:22-139.178.68.195:53734.service - OpenSSH per-connection server daemon (139.178.68.195:53734).
Aug 13 07:10:57.732446 groupadd[1618]: group added to /etc/group: name=google-sudoers, GID=1000
Aug 13 07:10:57.743624 groupadd[1618]: group added to /etc/gshadow: name=google-sudoers
Aug 13 07:10:57.804117 groupadd[1618]: new group: name=google-sudoers, GID=1000
Aug 13 07:10:57.838641 google-accounts[1605]: INFO Starting Google Accounts daemon.
Aug 13 07:10:57.852791 google-accounts[1605]: WARNING OS Login not installed.
Aug 13 07:10:57.854641 google-accounts[1605]: INFO Creating a new user account for 0.
Aug 13 07:10:57.859067 init.sh[1632]: useradd: invalid user name '0': use --badname to ignore
Aug 13 07:10:57.859669 google-accounts[1605]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Aug 13 07:10:58.000960 google-clock-skew[1606]: INFO Synced system time with hardware clock.
Aug 13 07:10:58.000970 systemd-resolved[1311]: Clock change detected. Flushing caches.
Aug 13 07:10:58.075206 sshd[1623]: Accepted publickey for core from 139.178.68.195 port 53734 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:10:58.077009 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:58.087000 systemd-logind[1439]: New session 3 of user core.
Aug 13 07:10:58.089510 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:10:58.259393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:10:58.271338 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 07:10:58.276838 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:10:58.282654 systemd[1]: Startup finished in 1.070s (kernel) + 9.602s (initrd) + 9.933s (userspace) = 20.606s.
Aug 13 07:10:58.303458 sshd[1623]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:58.312119 systemd[1]: sshd@2-10.128.0.53:22-139.178.68.195:53734.service: Deactivated successfully.
Aug 13 07:10:58.314805 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 07:10:58.317415 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit.
Aug 13 07:10:58.318960 systemd-logind[1439]: Removed session 3.
Aug 13 07:10:58.513075 ntpd[1428]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:35%2]:123
Aug 13 07:10:58.513645 ntpd[1428]: 13 Aug 07:10:58 ntpd[1428]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:35%2]:123
Aug 13 07:10:59.283797 kubelet[1641]: E0813 07:10:59.283696    1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:10:59.287789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:10:59.288071 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:10:59.288855 systemd[1]: kubelet.service: Consumed 1.360s CPU time.
Aug 13 07:11:08.359051 systemd[1]: Started sshd@3-10.128.0.53:22-139.178.68.195:60862.service - OpenSSH per-connection server daemon (139.178.68.195:60862).
Aug 13 07:11:08.659358 sshd[1655]: Accepted publickey for core from 139.178.68.195 port 60862 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:11:08.661743 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:08.669077 systemd-logind[1439]: New session 4 of user core.
Aug 13 07:11:08.675478 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:11:08.878338 sshd[1655]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:08.883558 systemd[1]: sshd@3-10.128.0.53:22-139.178.68.195:60862.service: Deactivated successfully.
Aug 13 07:11:08.886287 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:11:08.888492 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:11:08.890129 systemd-logind[1439]: Removed session 4.
Aug 13 07:11:08.941683 systemd[1]: Started sshd@4-10.128.0.53:22-139.178.68.195:60872.service - OpenSSH per-connection server daemon (139.178.68.195:60872).
Aug 13 07:11:09.223543 sshd[1662]: Accepted publickey for core from 139.178.68.195 port 60872 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:11:09.225350 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:09.231532 systemd-logind[1439]: New session 5 of user core.
Aug 13 07:11:09.239490 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:11:09.431163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:11:09.434440 sshd[1662]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:09.441590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:11:09.442253 systemd[1]: sshd@4-10.128.0.53:22-139.178.68.195:60872.service: Deactivated successfully.
Aug 13 07:11:09.452285 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:11:09.455334 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:11:09.464953 systemd-logind[1439]: Removed session 5.
Aug 13 07:11:09.496729 systemd[1]: Started sshd@5-10.128.0.53:22-139.178.68.195:60884.service - OpenSSH per-connection server daemon (139.178.68.195:60884).
Aug 13 07:11:09.785863 sshd[1672]: Accepted publickey for core from 139.178.68.195 port 60884 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:11:09.788258 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:09.795258 systemd-logind[1439]: New session 6 of user core.
Aug 13 07:11:09.806427 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:11:09.810417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:11:09.821714 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:11:09.874970 kubelet[1678]: E0813 07:11:09.874881    1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:11:09.880261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:11:09.880521 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:11:09.999867 sshd[1672]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:10.005511 systemd[1]: sshd@5-10.128.0.53:22-139.178.68.195:60884.service: Deactivated successfully.
Aug 13 07:11:10.007832 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:11:10.008763 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:11:10.010338 systemd-logind[1439]: Removed session 6.
Aug 13 07:11:10.054605 systemd[1]: Started sshd@6-10.128.0.53:22-139.178.68.195:45042.service - OpenSSH per-connection server daemon (139.178.68.195:45042).
Aug 13 07:11:10.338719 sshd[1691]: Accepted publickey for core from 139.178.68.195 port 45042 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:11:10.340845 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:10.347736 systemd-logind[1439]: New session 7 of user core.
Aug 13 07:11:10.357518 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:11:10.536714 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:11:10.537537 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:11:10.556506 sudo[1694]: pam_unix(sudo:session): session closed for user root
Aug 13 07:11:10.600461 sshd[1691]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:10.606731 systemd[1]: sshd@6-10.128.0.53:22-139.178.68.195:45042.service: Deactivated successfully.
Aug 13 07:11:10.609724 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 07:11:10.612267 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit.
Aug 13 07:11:10.614327 systemd-logind[1439]: Removed session 7.
Aug 13 07:11:10.664677 systemd[1]: Started sshd@7-10.128.0.53:22-139.178.68.195:45048.service - OpenSSH per-connection server daemon (139.178.68.195:45048).
Aug 13 07:11:10.951608 sshd[1699]: Accepted publickey for core from 139.178.68.195 port 45048 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:11:10.953889 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:10.961528 systemd-logind[1439]: New session 8 of user core.
Aug 13 07:11:10.971554 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 07:11:11.135225 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:11:11.135771 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:11:11.141913 sudo[1703]: pam_unix(sudo:session): session closed for user root
Aug 13 07:11:11.157824 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:11:11.158395 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:11:11.175637 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:11:11.187796 auditctl[1706]: No rules
Aug 13 07:11:11.188489 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:11:11.188792 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:11:11.196754 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:11:11.235036 augenrules[1724]: No rules
Aug 13 07:11:11.237810 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:11:11.239616 sudo[1702]: pam_unix(sudo:session): session closed for user root
Aug 13 07:11:11.284404 sshd[1699]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:11.291355 systemd[1]: sshd@7-10.128.0.53:22-139.178.68.195:45048.service: Deactivated successfully.
Aug 13 07:11:11.294011 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 07:11:11.294988 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit.
Aug 13 07:11:11.296663 systemd-logind[1439]: Removed session 8.
Aug 13 07:11:11.344914 systemd[1]: Started sshd@8-10.128.0.53:22-139.178.68.195:45056.service - OpenSSH per-connection server daemon (139.178.68.195:45056).
Aug 13 07:11:11.639022 sshd[1732]: Accepted publickey for core from 139.178.68.195 port 45056 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs
Aug 13 07:11:11.641275 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:11.647433 systemd-logind[1439]: New session 9 of user core.
Aug 13 07:11:11.658457 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 07:11:11.822102 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:11:11.822658 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:11:12.295848 (dockerd)[1751]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:11:12.295934 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:11:12.753759 dockerd[1751]: time="2025-08-13T07:11:12.753650769Z" level=info msg="Starting up"
Aug 13 07:11:12.915884 dockerd[1751]: time="2025-08-13T07:11:12.915555065Z" level=info msg="Loading containers: start."
Aug 13 07:11:13.081216 kernel: Initializing XFRM netlink socket
Aug 13 07:11:13.191777 systemd-networkd[1375]: docker0: Link UP
Aug 13 07:11:13.220429 dockerd[1751]: time="2025-08-13T07:11:13.220369591Z" level=info msg="Loading containers: done."
Aug 13 07:11:13.247277 dockerd[1751]: time="2025-08-13T07:11:13.247207083Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:11:13.247508 dockerd[1751]: time="2025-08-13T07:11:13.247354670Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:11:13.247574 dockerd[1751]: time="2025-08-13T07:11:13.247533657Z" level=info msg="Daemon has completed initialization"
Aug 13 07:11:13.292770 dockerd[1751]: time="2025-08-13T07:11:13.292142862Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:11:13.292437 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:11:14.226785 containerd[1466]: time="2025-08-13T07:11:14.226637805Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\""
Aug 13 07:11:14.752805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158905044.mount: Deactivated successfully.
Aug 13 07:11:16.600644 containerd[1466]: time="2025-08-13T07:11:16.600561198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:16.602347 containerd[1466]: time="2025-08-13T07:11:16.602285161Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30084865"
Aug 13 07:11:16.603887 containerd[1466]: time="2025-08-13T07:11:16.603798986Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:16.608196 containerd[1466]: time="2025-08-13T07:11:16.608099614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:16.610078 containerd[1466]: time="2025-08-13T07:11:16.609657806Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 2.382956121s"
Aug 13 07:11:16.610078 containerd[1466]: time="2025-08-13T07:11:16.609714198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\""
Aug 13 07:11:16.611018 containerd[1466]: time="2025-08-13T07:11:16.610986935Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\""
Aug 13 07:11:18.164701 containerd[1466]: time="2025-08-13T07:11:18.164620893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:18.167024 containerd[1466]: time="2025-08-13T07:11:18.166949024Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26021295"
Aug 13 07:11:18.169067 containerd[1466]: time="2025-08-13T07:11:18.168968258Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:18.176732 containerd[1466]: time="2025-08-13T07:11:18.176636865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:18.179612 containerd[1466]: time="2025-08-13T07:11:18.178689543Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.567655888s"
Aug 13 07:11:18.179612 containerd[1466]: time="2025-08-13T07:11:18.178764724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\""
Aug 13 07:11:18.180278 containerd[1466]: time="2025-08-13T07:11:18.180237638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\""
Aug 13 07:11:19.575544 containerd[1466]: time="2025-08-13T07:11:19.575463459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:19.577280 containerd[1466]: time="2025-08-13T07:11:19.577207227Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20156929"
Aug 13 07:11:19.578523 containerd[1466]: time="2025-08-13T07:11:19.578433937Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:19.582512 containerd[1466]: time="2025-08-13T07:11:19.582431566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:19.583966 containerd[1466]: time="2025-08-13T07:11:19.583907977Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.403598926s"
Aug 13 07:11:19.584088 containerd[1466]: time="2025-08-13T07:11:19.583968701Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\""
Aug 13 07:11:19.585029 containerd[1466]: time="2025-08-13T07:11:19.584621050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\""
Aug 13 07:11:19.931349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 07:11:19.942752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:11:20.299601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:11:20.311836 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:11:20.372642 kubelet[1959]: E0813 07:11:20.372558 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:11:20.375763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:11:20.376039 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:11:21.234454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653828909.mount: Deactivated successfully.
Aug 13 07:11:21.938020 containerd[1466]: time="2025-08-13T07:11:21.937948908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:21.939639 containerd[1466]: time="2025-08-13T07:11:21.939541051Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31894561"
Aug 13 07:11:21.941520 containerd[1466]: time="2025-08-13T07:11:21.941423248Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:21.945961 containerd[1466]: time="2025-08-13T07:11:21.945869037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:21.947344 containerd[1466]: time="2025-08-13T07:11:21.947104034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.362435947s"
Aug 13 07:11:21.947344 containerd[1466]: time="2025-08-13T07:11:21.947164600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\""
Aug 13 07:11:21.947917 containerd[1466]: time="2025-08-13T07:11:21.947864371Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Aug 13 07:11:22.424724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559275470.mount: Deactivated successfully.
Aug 13 07:11:23.931568 containerd[1466]: time="2025-08-13T07:11:23.931479171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:23.933347 containerd[1466]: time="2025-08-13T07:11:23.933282134Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880"
Aug 13 07:11:23.934307 containerd[1466]: time="2025-08-13T07:11:23.934226208Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:23.938309 containerd[1466]: time="2025-08-13T07:11:23.938208413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:23.940126 containerd[1466]: time="2025-08-13T07:11:23.939906196Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.991783578s"
Aug 13 07:11:23.940126 containerd[1466]: time="2025-08-13T07:11:23.939978863Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Aug 13 07:11:23.940959 containerd[1466]: time="2025-08-13T07:11:23.940870368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 07:11:24.367121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021502861.mount: Deactivated successfully.
Aug 13 07:11:24.373767 containerd[1466]: time="2025-08-13T07:11:24.373684785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:24.374922 containerd[1466]: time="2025-08-13T07:11:24.374824198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Aug 13 07:11:24.376427 containerd[1466]: time="2025-08-13T07:11:24.376360556Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:24.379559 containerd[1466]: time="2025-08-13T07:11:24.379520521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:24.381002 containerd[1466]: time="2025-08-13T07:11:24.380767197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 439.847087ms"
Aug 13 07:11:24.381002 containerd[1466]: time="2025-08-13T07:11:24.380815175Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 07:11:24.381629 containerd[1466]: time="2025-08-13T07:11:24.381401205Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Aug 13 07:11:24.870660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576070475.mount: Deactivated successfully.
Aug 13 07:11:25.635343 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 13 07:11:27.139316 containerd[1466]: time="2025-08-13T07:11:27.139240451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:27.141328 containerd[1466]: time="2025-08-13T07:11:27.141233879Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58251906"
Aug 13 07:11:27.142791 containerd[1466]: time="2025-08-13T07:11:27.142697921Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:27.149544 containerd[1466]: time="2025-08-13T07:11:27.149450787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:11:27.151371 containerd[1466]: time="2025-08-13T07:11:27.151122069Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.769679127s"
Aug 13 07:11:27.151371 containerd[1466]: time="2025-08-13T07:11:27.151209490Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Aug 13 07:11:30.431548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 07:11:30.439575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:11:31.074482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:11:31.086061 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:11:31.163029 kubelet[2117]: E0813 07:11:31.162963 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:11:31.166980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:11:31.168505 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:11:32.212466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:11:32.220617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:11:32.266230 systemd[1]: Reloading requested from client PID 2132 ('systemctl') (unit session-9.scope)...
Aug 13 07:11:32.266256 systemd[1]: Reloading...
Aug 13 07:11:32.420286 zram_generator::config[2171]: No configuration found.
Aug 13 07:11:32.595228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:11:32.715109 systemd[1]: Reloading finished in 448 ms.
Aug 13 07:11:32.785546 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 07:11:32.785694 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 07:11:32.786138 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:11:32.793746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:11:33.056376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:11:33.072895 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:11:33.129885 kubelet[2223]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:11:33.129885 kubelet[2223]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:11:33.129885 kubelet[2223]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:11:33.130488 kubelet[2223]: I0813 07:11:33.130010 2223 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:11:33.827246 kubelet[2223]: I0813 07:11:33.827144 2223 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Aug 13 07:11:33.827246 kubelet[2223]: I0813 07:11:33.827227 2223 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:11:33.827642 kubelet[2223]: I0813 07:11:33.827601 2223 server.go:956] "Client rotation is on, will bootstrap in background"
Aug 13 07:11:33.880328 kubelet[2223]: E0813 07:11:33.880229 2223 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Aug 13 07:11:33.882203 kubelet[2223]: I0813 07:11:33.880816 2223 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:11:33.892259 kubelet[2223]: E0813 07:11:33.892217 2223 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:11:33.892259 kubelet[2223]: I0813 07:11:33.892262 2223 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:11:33.897760 kubelet[2223]: I0813 07:11:33.897703 2223 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:11:33.898155 kubelet[2223]: I0813 07:11:33.898110 2223 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:11:33.898475 kubelet[2223]: I0813 07:11:33.898146 2223 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 07:11:33.898687 kubelet[2223]: I0813 07:11:33.898478 2223 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:11:33.898687 kubelet[2223]: I0813 07:11:33.898498 2223 container_manager_linux.go:303] "Creating device plugin manager"
Aug 13 07:11:33.898687 kubelet[2223]: I0813 07:11:33.898680 2223 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:11:33.904542 kubelet[2223]: I0813 07:11:33.904132 2223 kubelet.go:480] "Attempting to sync node with API server"
Aug 13 07:11:33.904542 kubelet[2223]: I0813 07:11:33.904231 2223 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:11:33.904542 kubelet[2223]: I0813 07:11:33.904283 2223 kubelet.go:386] "Adding apiserver pod source"
Aug 13 07:11:33.904542 kubelet[2223]: I0813 07:11:33.904310 2223 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:11:33.920010 kubelet[2223]: E0813 07:11:33.918852 2223 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Aug 13 07:11:33.920010 kubelet[2223]: I0813 07:11:33.919018 2223 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:11:33.920010 kubelet[2223]: I0813 07:11:33.919946 2223 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 13 07:11:33.922196 kubelet[2223]: W0813 07:11:33.921415 2223 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 07:11:33.929539 kubelet[2223]: E0813 07:11:33.929486 2223 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Aug 13 07:11:33.939983 kubelet[2223]: I0813 07:11:33.939891 2223 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 07:11:33.940622 kubelet[2223]: I0813 07:11:33.940042 2223 server.go:1289] "Started kubelet"
Aug 13 07:11:33.942470 kubelet[2223]: I0813 07:11:33.942112 2223 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:11:33.944190 kubelet[2223]: I0813 07:11:33.942937 2223 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:11:33.945490 kubelet[2223]: I0813 07:11:33.944374 2223 server.go:317] "Adding debug handlers to kubelet server"
Aug 13 07:11:33.950196 kubelet[2223]: E0813 07:11:33.947830 2223 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal.185b420acc8f32e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,UID:ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 07:11:33.939983078 +0000 UTC m=+0.860405571,LastTimestamp:2025-08-13 07:11:33.939983078 +0000 UTC m=+0.860405571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,}"
Aug 13 07:11:33.950460 kubelet[2223]: I0813 07:11:33.950293 2223 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:11:33.950662 kubelet[2223]: I0813 07:11:33.950634 2223 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:11:33.954309 kubelet[2223]: I0813 07:11:33.953205 2223 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 07:11:33.954309 kubelet[2223]: E0813 07:11:33.953547 2223 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found"
Aug 13 07:11:33.956846 kubelet[2223]: I0813 07:11:33.956814 2223 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 07:11:33.957002 kubelet[2223]: I0813 07:11:33.956917 2223 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:11:33.957326 kubelet[2223]: I0813 07:11:33.957303 2223 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:11:33.958823 kubelet[2223]: E0813 07:11:33.958786 2223 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Aug 13 07:11:33.958979 kubelet[2223]: E0813 07:11:33.958920 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.53:6443: connect: connection refused" interval="200ms"
Aug 13 07:11:33.966236 kubelet[2223]: I0813 07:11:33.966204 2223 factory.go:223] Registration of the containerd container factory successfully
Aug 13 07:11:33.966442 kubelet[2223]: I0813 07:11:33.966428 2223 factory.go:223] Registration of the systemd container factory successfully
Aug 13 07:11:33.966725 kubelet[2223]: I0813 07:11:33.966702 2223 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:11:33.973378 kubelet[2223]: E0813 07:11:33.973205 2223 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 07:11:33.997721 kubelet[2223]: I0813 07:11:33.997626 2223 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:11:34.000387 kubelet[2223]: I0813 07:11:33.999996 2223 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:11:34.000387 kubelet[2223]: I0813 07:11:34.000029 2223 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 13 07:11:34.000387 kubelet[2223]: I0813 07:11:34.000061 2223 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 07:11:34.000387 kubelet[2223]: I0813 07:11:34.000074 2223 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:11:34.000387 kubelet[2223]: E0813 07:11:34.000136 2223 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:11:34.003337 kubelet[2223]: E0813 07:11:34.003072 2223 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:11:34.005693 kubelet[2223]: I0813 07:11:34.005660 2223 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:11:34.005693 kubelet[2223]: I0813 07:11:34.005690 2223 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:11:34.005879 kubelet[2223]: I0813 07:11:34.005715 2223 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:11:34.009554 kubelet[2223]: I0813 07:11:34.009504 2223 policy_none.go:49] "None policy: Start" Aug 13 07:11:34.009554 kubelet[2223]: I0813 07:11:34.009536 2223 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:11:34.009554 kubelet[2223]: I0813 07:11:34.009557 2223 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:11:34.018393 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:11:34.028419 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:11:34.033764 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 07:11:34.045903 kubelet[2223]: E0813 07:11:34.045618 2223 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:11:34.045903 kubelet[2223]: I0813 07:11:34.045915 2223 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:11:34.046406 kubelet[2223]: I0813 07:11:34.045937 2223 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:11:34.046406 kubelet[2223]: I0813 07:11:34.046299 2223 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:11:34.048457 kubelet[2223]: E0813 07:11:34.048426 2223 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:11:34.048687 kubelet[2223]: E0813 07:11:34.048501 2223 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" Aug 13 07:11:34.126196 systemd[1]: Created slice kubepods-burstable-pod44f806ca55f710307c6fdc9278683e2c.slice - libcontainer container kubepods-burstable-pod44f806ca55f710307c6fdc9278683e2c.slice. Aug 13 07:11:34.143829 kubelet[2223]: E0813 07:11:34.143527 2223 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.147705 systemd[1]: Created slice kubepods-burstable-pod95734358ec3c82a45d601bd496cebb55.slice - libcontainer container kubepods-burstable-pod95734358ec3c82a45d601bd496cebb55.slice. 
Aug 13 07:11:34.151126 kubelet[2223]: I0813 07:11:34.151090 2223 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.151632 kubelet[2223]: E0813 07:11:34.151580 2223 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.53:6443/api/v1/nodes\": dial tcp 10.128.0.53:6443: connect: connection refused" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.155385 kubelet[2223]: E0813 07:11:34.155088 2223 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.158510 kubelet[2223]: I0813 07:11:34.158475 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.158907 kubelet[2223]: I0813 07:11:34.158779 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44f806ca55f710307c6fdc9278683e2c-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"44f806ca55f710307c6fdc9278683e2c\") " pod="kube-system/kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.158907 kubelet[2223]: I0813 07:11:34.158852 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/95734358ec3c82a45d601bd496cebb55-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"95734358ec3c82a45d601bd496cebb55\") " pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.159222 kubelet[2223]: I0813 07:11:34.159000 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95734358ec3c82a45d601bd496cebb55-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"95734358ec3c82a45d601bd496cebb55\") " pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.159135 systemd[1]: Created slice kubepods-burstable-pod3e5d31cbc06f2829cf60810938e6285f.slice - libcontainer container kubepods-burstable-pod3e5d31cbc06f2829cf60810938e6285f.slice. Aug 13 07:11:34.160029 kubelet[2223]: I0813 07:11:34.159041 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95734358ec3c82a45d601bd496cebb55-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"95734358ec3c82a45d601bd496cebb55\") " pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.160029 kubelet[2223]: I0813 07:11:34.159519 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 
07:11:34.160029 kubelet[2223]: I0813 07:11:34.159570 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.160029 kubelet[2223]: I0813 07:11:34.159596 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.160326 kubelet[2223]: I0813 07:11:34.159645 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.160326 kubelet[2223]: E0813 07:11:34.159878 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.53:6443: connect: connection refused" interval="400ms" Aug 13 07:11:34.163077 kubelet[2223]: E0813 07:11:34.163043 2223 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.359509 kubelet[2223]: I0813 07:11:34.359459 2223 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.359921 kubelet[2223]: E0813 07:11:34.359885 2223 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.53:6443/api/v1/nodes\": dial tcp 10.128.0.53:6443: connect: connection refused" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.445432 containerd[1466]: time="2025-08-13T07:11:34.445339220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,Uid:44f806ca55f710307c6fdc9278683e2c,Namespace:kube-system,Attempt:0,}" Aug 13 07:11:34.456344 containerd[1466]: time="2025-08-13T07:11:34.456287717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,Uid:95734358ec3c82a45d601bd496cebb55,Namespace:kube-system,Attempt:0,}" Aug 13 07:11:34.469315 containerd[1466]: time="2025-08-13T07:11:34.469253227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,Uid:3e5d31cbc06f2829cf60810938e6285f,Namespace:kube-system,Attempt:0,}" Aug 13 07:11:34.561520 kubelet[2223]: E0813 07:11:34.561450 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.53:6443: connect: connection refused" interval="800ms" Aug 13 07:11:34.767888 kubelet[2223]: I0813 07:11:34.767711 2223 
kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.768372 kubelet[2223]: E0813 07:11:34.768307 2223 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.53:6443/api/v1/nodes\": dial tcp 10.128.0.53:6443: connect: connection refused" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:11:34.841795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790777429.mount: Deactivated successfully. Aug 13 07:11:34.853301 containerd[1466]: time="2025-08-13T07:11:34.853198813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:11:34.854749 containerd[1466]: time="2025-08-13T07:11:34.854680046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Aug 13 07:11:34.856611 containerd[1466]: time="2025-08-13T07:11:34.856561478Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:11:34.858463 containerd[1466]: time="2025-08-13T07:11:34.858407921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:11:34.859646 containerd[1466]: time="2025-08-13T07:11:34.859592297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:11:34.860471 containerd[1466]: time="2025-08-13T07:11:34.860163040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=0" Aug 13 07:11:34.862455 containerd[1466]: time="2025-08-13T07:11:34.862338852Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:11:34.864143 containerd[1466]: time="2025-08-13T07:11:34.864007510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:11:34.866937 containerd[1466]: time="2025-08-13T07:11:34.866889999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 397.537116ms" Aug 13 07:11:34.869654 containerd[1466]: time="2025-08-13T07:11:34.869427774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 413.034982ms" Aug 13 07:11:34.875535 containerd[1466]: time="2025-08-13T07:11:34.875467782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 430.002938ms" Aug 13 07:11:35.034010 kubelet[2223]: E0813 07:11:35.033818 2223 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.128.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:11:35.115286 containerd[1466]: time="2025-08-13T07:11:35.115093339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:11:35.115833 containerd[1466]: time="2025-08-13T07:11:35.115587752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:11:35.115833 containerd[1466]: time="2025-08-13T07:11:35.115722794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:11:35.116575 containerd[1466]: time="2025-08-13T07:11:35.116407231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:11:35.120985 containerd[1466]: time="2025-08-13T07:11:35.118589419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:11:35.121773 containerd[1466]: time="2025-08-13T07:11:35.121356087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:11:35.121773 containerd[1466]: time="2025-08-13T07:11:35.121400460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:11:35.121773 containerd[1466]: time="2025-08-13T07:11:35.121539876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:11:35.131204 containerd[1466]: time="2025-08-13T07:11:35.130727355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:11:35.131204 containerd[1466]: time="2025-08-13T07:11:35.130837886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:11:35.131204 containerd[1466]: time="2025-08-13T07:11:35.130864528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:11:35.131204 containerd[1466]: time="2025-08-13T07:11:35.131014520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:11:35.175758 systemd[1]: Started cri-containerd-d57057841995ca066a0c292855f14b479112e586b2daa16269d51110c5fba0d6.scope - libcontainer container d57057841995ca066a0c292855f14b479112e586b2daa16269d51110c5fba0d6. Aug 13 07:11:35.199501 systemd[1]: Started cri-containerd-0505e0fbb7d23be79734f5c9168506675a828daca95eef615b173e197daf19dc.scope - libcontainer container 0505e0fbb7d23be79734f5c9168506675a828daca95eef615b173e197daf19dc. Aug 13 07:11:35.203938 systemd[1]: Started cri-containerd-d099aeec3cafbcb6510a26e80fc23c9a52de80ec40718a99be888c2dea9406aa.scope - libcontainer container d099aeec3cafbcb6510a26e80fc23c9a52de80ec40718a99be888c2dea9406aa. 
Aug 13 07:11:35.219139 kubelet[2223]: E0813 07:11:35.219070 2223 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:11:35.236343 kubelet[2223]: E0813 07:11:35.236211 2223 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:11:35.312123 containerd[1466]: time="2025-08-13T07:11:35.311927493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,Uid:44f806ca55f710307c6fdc9278683e2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d57057841995ca066a0c292855f14b479112e586b2daa16269d51110c5fba0d6\"" Aug 13 07:11:35.320457 containerd[1466]: time="2025-08-13T07:11:35.320392465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,Uid:3e5d31cbc06f2829cf60810938e6285f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d099aeec3cafbcb6510a26e80fc23c9a52de80ec40718a99be888c2dea9406aa\"" Aug 13 07:11:35.326736 kubelet[2223]: E0813 07:11:35.325837 2223 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-21291" Aug 13 07:11:35.326736 kubelet[2223]: E0813 07:11:35.325920 2223 
kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flat" Aug 13 07:11:35.334496 containerd[1466]: time="2025-08-13T07:11:35.334413913Z" level=info msg="CreateContainer within sandbox \"d57057841995ca066a0c292855f14b479112e586b2daa16269d51110c5fba0d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:11:35.336655 containerd[1466]: time="2025-08-13T07:11:35.336603470Z" level=info msg="CreateContainer within sandbox \"d099aeec3cafbcb6510a26e80fc23c9a52de80ec40718a99be888c2dea9406aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:11:35.358393 containerd[1466]: time="2025-08-13T07:11:35.358246874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal,Uid:95734358ec3c82a45d601bd496cebb55,Namespace:kube-system,Attempt:0,} returns sandbox id \"0505e0fbb7d23be79734f5c9168506675a828daca95eef615b173e197daf19dc\"" Aug 13 07:11:35.361144 kubelet[2223]: E0813 07:11:35.361083 2223 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-21291" Aug 13 07:11:35.362196 kubelet[2223]: E0813 07:11:35.362140 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.53:6443: connect: connection refused" interval="1.6s" Aug 13 07:11:35.365907 containerd[1466]: time="2025-08-13T07:11:35.365854152Z" level=info msg="CreateContainer within sandbox 
\"0505e0fbb7d23be79734f5c9168506675a828daca95eef615b173e197daf19dc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:11:35.368592 containerd[1466]: time="2025-08-13T07:11:35.368517723Z" level=info msg="CreateContainer within sandbox \"d57057841995ca066a0c292855f14b479112e586b2daa16269d51110c5fba0d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b510db79d7197a1b9f54927779546665f566b9e70302d9bec87af60e2d39217\"" Aug 13 07:11:35.369386 containerd[1466]: time="2025-08-13T07:11:35.369208674Z" level=info msg="CreateContainer within sandbox \"d099aeec3cafbcb6510a26e80fc23c9a52de80ec40718a99be888c2dea9406aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9a4cac7fa77a900e7d5e5dee7c4c21c7e45486f3e56729cd3a323ffacfb66f4d\"" Aug 13 07:11:35.370212 containerd[1466]: time="2025-08-13T07:11:35.369642929Z" level=info msg="StartContainer for \"1b510db79d7197a1b9f54927779546665f566b9e70302d9bec87af60e2d39217\"" Aug 13 07:11:35.370212 containerd[1466]: time="2025-08-13T07:11:35.370062751Z" level=info msg="StartContainer for \"9a4cac7fa77a900e7d5e5dee7c4c21c7e45486f3e56729cd3a323ffacfb66f4d\"" Aug 13 07:11:35.407062 containerd[1466]: time="2025-08-13T07:11:35.406989294Z" level=info msg="CreateContainer within sandbox \"0505e0fbb7d23be79734f5c9168506675a828daca95eef615b173e197daf19dc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"713bd31231bff5c69f37c2d75bec2a1523920a234bd950999a97c60c1a7f9d18\"" Aug 13 07:11:35.408105 containerd[1466]: time="2025-08-13T07:11:35.408057310Z" level=info msg="StartContainer for \"713bd31231bff5c69f37c2d75bec2a1523920a234bd950999a97c60c1a7f9d18\"" Aug 13 07:11:35.444457 systemd[1]: Started cri-containerd-1b510db79d7197a1b9f54927779546665f566b9e70302d9bec87af60e2d39217.scope - libcontainer container 1b510db79d7197a1b9f54927779546665f566b9e70302d9bec87af60e2d39217. 
Aug 13 07:11:35.447856 systemd[1]: Started cri-containerd-9a4cac7fa77a900e7d5e5dee7c4c21c7e45486f3e56729cd3a323ffacfb66f4d.scope - libcontainer container 9a4cac7fa77a900e7d5e5dee7c4c21c7e45486f3e56729cd3a323ffacfb66f4d.
Aug 13 07:11:35.452352 kubelet[2223]: E0813 07:11:35.451088 2223 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Aug 13 07:11:35.512737 systemd[1]: Started cri-containerd-713bd31231bff5c69f37c2d75bec2a1523920a234bd950999a97c60c1a7f9d18.scope - libcontainer container 713bd31231bff5c69f37c2d75bec2a1523920a234bd950999a97c60c1a7f9d18.
Aug 13 07:11:35.565506 containerd[1466]: time="2025-08-13T07:11:35.563529926Z" level=info msg="StartContainer for \"1b510db79d7197a1b9f54927779546665f566b9e70302d9bec87af60e2d39217\" returns successfully"
Aug 13 07:11:35.579039 kubelet[2223]: I0813 07:11:35.578737 2223 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:35.580565 kubelet[2223]: E0813 07:11:35.580407 2223 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.53:6443/api/v1/nodes\": dial tcp 10.128.0.53:6443: connect: connection refused" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:35.635298 containerd[1466]: time="2025-08-13T07:11:35.635048048Z" level=info msg="StartContainer for \"9a4cac7fa77a900e7d5e5dee7c4c21c7e45486f3e56729cd3a323ffacfb66f4d\" returns successfully"
Aug 13 07:11:35.647810 containerd[1466]: time="2025-08-13T07:11:35.647581854Z" level=info msg="StartContainer for \"713bd31231bff5c69f37c2d75bec2a1523920a234bd950999a97c60c1a7f9d18\" returns successfully"
Aug 13 07:11:36.018231 kubelet[2223]: E0813 07:11:36.018116 2223 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:36.018853 kubelet[2223]: E0813 07:11:36.018810 2223 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:36.028213 kubelet[2223]: E0813 07:11:36.027604 2223 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:37.033402 kubelet[2223]: E0813 07:11:37.033356 2223 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:37.035887 kubelet[2223]: E0813 07:11:37.034996 2223 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:37.191241 kubelet[2223]: I0813 07:11:37.188856 2223 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:38.847596 kubelet[2223]: E0813 07:11:38.847523 2223 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:38.915415 kubelet[2223]: E0813 07:11:38.915310 2223 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" not found" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:38.930210 kubelet[2223]: I0813 07:11:38.929116 2223 apiserver.go:52] "Watching apiserver"
Aug 13 07:11:38.937676 kubelet[2223]: I0813 07:11:38.937614 2223 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:38.955654 kubelet[2223]: I0813 07:11:38.955355 2223 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:38.958313 kubelet[2223]: I0813 07:11:38.958248 2223 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 07:11:39.028618 kubelet[2223]: E0813 07:11:39.028544 2223 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:39.028618 kubelet[2223]: I0813 07:11:39.028614 2223 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:39.034067 kubelet[2223]: E0813 07:11:39.034023 2223 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:39.034281 kubelet[2223]: I0813 07:11:39.034080 2223 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:39.036407 kubelet[2223]: E0813 07:11:39.036363 2223 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:39.681302 update_engine[1442]: I20250813 07:11:39.680224 1442 update_attempter.cc:509] Updating boot flags...
Aug 13 07:11:39.769226 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2514)
Aug 13 07:11:39.917224 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2510)
Aug 13 07:11:40.101207 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2510)
Aug 13 07:11:40.107607 kubelet[2223]: I0813 07:11:40.106096 2223 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:40.123585 kubelet[2223]: I0813 07:11:40.121366 2223 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Aug 13 07:11:41.061030 systemd[1]: Reloading requested from client PID 2527 ('systemctl') (unit session-9.scope)...
Aug 13 07:11:41.061055 systemd[1]: Reloading...
Aug 13 07:11:41.238217 zram_generator::config[2567]: No configuration found.
Aug 13 07:11:41.385144 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:11:41.511878 systemd[1]: Reloading finished in 450 ms.
Aug 13 07:11:41.571262 kubelet[2223]: I0813 07:11:41.571124 2223 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:11:41.571302 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:11:41.595626 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 07:11:41.596019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:11:41.596728 systemd[1]: kubelet.service: Consumed 1.441s CPU time, 136.0M memory peak, 0B memory swap peak.
Aug 13 07:11:41.606620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:11:41.985482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:11:41.998879 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:11:42.117392 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:11:42.119673 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:11:42.121214 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:11:42.121214 kubelet[2615]: I0813 07:11:42.119972 2615 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:11:42.136591 kubelet[2615]: I0813 07:11:42.136533 2615 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Aug 13 07:11:42.136591 kubelet[2615]: I0813 07:11:42.136585 2615 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:11:42.137118 kubelet[2615]: I0813 07:11:42.137083 2615 server.go:956] "Client rotation is on, will bootstrap in background"
Aug 13 07:11:42.139521 kubelet[2615]: I0813 07:11:42.139482 2615 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Aug 13 07:11:42.145692 kubelet[2615]: I0813 07:11:42.144410 2615 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:11:42.164286 kubelet[2615]: E0813 07:11:42.164135 2615 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:11:42.164286 kubelet[2615]: I0813 07:11:42.164285 2615 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:11:42.173697 kubelet[2615]: I0813 07:11:42.173632 2615 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:11:42.174294 kubelet[2615]: I0813 07:11:42.174228 2615 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:11:42.175521 kubelet[2615]: I0813 07:11:42.174277 2615 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 07:11:42.175521 kubelet[2615]: I0813 07:11:42.174563 2615 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:11:42.175521 kubelet[2615]: I0813 07:11:42.174583 2615 container_manager_linux.go:303] "Creating device plugin manager"
Aug 13 07:11:42.175521 kubelet[2615]: I0813 07:11:42.174670 2615 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:11:42.175894 kubelet[2615]: I0813 07:11:42.175854 2615 kubelet.go:480] "Attempting to sync node with API server"
Aug 13 07:11:42.175894 kubelet[2615]: I0813 07:11:42.175895 2615 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:11:42.176004 kubelet[2615]: I0813 07:11:42.175948 2615 kubelet.go:386] "Adding apiserver pod source"
Aug 13 07:11:42.176055 kubelet[2615]: I0813 07:11:42.176019 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:11:42.181717 kubelet[2615]: I0813 07:11:42.181057 2615 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:11:42.182004 kubelet[2615]: I0813 07:11:42.181904 2615 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 13 07:11:42.235489 kubelet[2615]: I0813 07:11:42.235418 2615 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 07:11:42.239296 kubelet[2615]: I0813 07:11:42.239082 2615 server.go:1289] "Started kubelet"
Aug 13 07:11:42.240431 kubelet[2615]: I0813 07:11:42.240263 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:11:42.240848 kubelet[2615]: I0813 07:11:42.240819 2615 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:11:42.241123 kubelet[2615]: I0813 07:11:42.240938 2615 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:11:42.249543 kubelet[2615]: I0813 07:11:42.249506 2615 server.go:317] "Adding debug handlers to kubelet server"
Aug 13 07:11:42.256661 kubelet[2615]: I0813 07:11:42.254879 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:11:42.258308 kubelet[2615]: I0813 07:11:42.257796 2615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:11:42.265236 kubelet[2615]: I0813 07:11:42.265212 2615 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 07:11:42.267397 kubelet[2615]: I0813 07:11:42.267349 2615 factory.go:223] Registration of the systemd container factory successfully
Aug 13 07:11:42.267556 kubelet[2615]: I0813 07:11:42.267526 2615 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:11:42.268862 kubelet[2615]: I0813 07:11:42.268807 2615 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 07:11:42.270803 kubelet[2615]: E0813 07:11:42.270758 2615 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 07:11:42.276850 kubelet[2615]: I0813 07:11:42.275877 2615 factory.go:223] Registration of the containerd container factory successfully
Aug 13 07:11:42.284196 kubelet[2615]: I0813 07:11:42.281608 2615 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:11:42.320677 kubelet[2615]: I0813 07:11:42.319874 2615 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:11:42.322247 kubelet[2615]: I0813 07:11:42.322139 2615 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:11:42.322415 kubelet[2615]: I0813 07:11:42.322321 2615 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 13 07:11:42.322495 kubelet[2615]: I0813 07:11:42.322481 2615 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 07:11:42.322555 kubelet[2615]: I0813 07:11:42.322498 2615 kubelet.go:2436] "Starting kubelet main sync loop"
Aug 13 07:11:42.323950 kubelet[2615]: E0813 07:11:42.323907 2615 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 07:11:42.401265 kubelet[2615]: I0813 07:11:42.400477 2615 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 07:11:42.401265 kubelet[2615]: I0813 07:11:42.400516 2615 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 07:11:42.401265 kubelet[2615]: I0813 07:11:42.400550 2615 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:11:42.401265 kubelet[2615]: I0813 07:11:42.400805 2615 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 07:11:42.401265 kubelet[2615]: I0813 07:11:42.400821 2615 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 07:11:42.401265 kubelet[2615]: I0813 07:11:42.400851 2615 policy_none.go:49] "None policy: Start"
Aug 13 07:11:42.401265 kubelet[2615]: I0813 07:11:42.400870 2615 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 07:11:42.401265 kubelet[2615]: I0813 07:11:42.400889 2615 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 07:11:42.401265 kubelet[2615]: I0813 07:11:42.401050 2615 state_mem.go:75] "Updated machine memory state"
Aug 13 07:11:42.409556 kubelet[2615]: E0813 07:11:42.408608 2615 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Aug 13 07:11:42.409556 kubelet[2615]: I0813 07:11:42.408867 2615 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 07:11:42.409556 kubelet[2615]: I0813 07:11:42.408886 2615 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 07:11:42.409556 kubelet[2615]: I0813 07:11:42.409325 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 07:11:42.411570 kubelet[2615]: E0813 07:11:42.411532 2615 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 07:11:42.426634 kubelet[2615]: I0813 07:11:42.425803 2615 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.429932 kubelet[2615]: I0813 07:11:42.428706 2615 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.429932 kubelet[2615]: I0813 07:11:42.429672 2615 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.450612 kubelet[2615]: I0813 07:11:42.450095 2615 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Aug 13 07:11:42.452524 kubelet[2615]: I0813 07:11:42.452203 2615 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Aug 13 07:11:42.452997 kubelet[2615]: I0813 07:11:42.452831 2615 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Aug 13 07:11:42.452997 kubelet[2615]: E0813 07:11:42.452915 2615 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.482310 kubelet[2615]: I0813 07:11:42.482205 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.482753 kubelet[2615]: I0813 07:11:42.482359 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.482753 kubelet[2615]: I0813 07:11:42.482411 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.482753 kubelet[2615]: I0813 07:11:42.482448 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44f806ca55f710307c6fdc9278683e2c-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"44f806ca55f710307c6fdc9278683e2c\") " pod="kube-system/kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.482753 kubelet[2615]: I0813 07:11:42.482497 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95734358ec3c82a45d601bd496cebb55-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"95734358ec3c82a45d601bd496cebb55\") " pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.483100 kubelet[2615]: I0813 07:11:42.482534 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95734358ec3c82a45d601bd496cebb55-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"95734358ec3c82a45d601bd496cebb55\") " pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.483100 kubelet[2615]: I0813 07:11:42.482566 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.483100 kubelet[2615]: I0813 07:11:42.482592 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e5d31cbc06f2829cf60810938e6285f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"3e5d31cbc06f2829cf60810938e6285f\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.483100 kubelet[2615]: I0813 07:11:42.482634 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95734358ec3c82a45d601bd496cebb55-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" (UID: \"95734358ec3c82a45d601bd496cebb55\") " pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.532414 kubelet[2615]: I0813 07:11:42.530388 2615 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.547274 kubelet[2615]: I0813 07:11:42.546056 2615 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:42.547274 kubelet[2615]: I0813 07:11:42.546254 2615 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal"
Aug 13 07:11:43.177278 kubelet[2615]: I0813 07:11:43.177089 2615 apiserver.go:52] "Watching apiserver"
Aug 13 07:11:43.269316 kubelet[2615]: I0813 07:11:43.269225 2615 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 07:11:43.393207 kubelet[2615]: I0813 07:11:43.391664 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" podStartSLOduration=3.391642187 podStartE2EDuration="3.391642187s" podCreationTimestamp="2025-08-13 07:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:11:43.390599491 +0000 UTC m=+1.381503108" watchObservedRunningTime="2025-08-13 07:11:43.391642187 +0000 UTC m=+1.382545800"
Aug 13 07:11:43.421871 kubelet[2615]: I0813 07:11:43.421782 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" podStartSLOduration=1.421753373 podStartE2EDuration="1.421753373s" podCreationTimestamp="2025-08-13 07:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:11:43.407211936 +0000 UTC m=+1.398115560" watchObservedRunningTime="2025-08-13 07:11:43.421753373 +0000 UTC m=+1.412656991"
Aug 13 07:11:43.441526 kubelet[2615]: I0813 07:11:43.441306 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" podStartSLOduration=1.441278598 podStartE2EDuration="1.441278598s" podCreationTimestamp="2025-08-13 07:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:11:43.422810883 +0000 UTC m=+1.413714501" watchObservedRunningTime="2025-08-13 07:11:43.441278598 +0000 UTC m=+1.432182221"
Aug 13 07:11:46.810273 kubelet[2615]: I0813 07:11:46.810209 2615 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 07:11:46.810883 containerd[1466]: time="2025-08-13T07:11:46.810739930Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 07:11:46.811345 kubelet[2615]: I0813 07:11:46.811046 2615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 07:11:47.923907 systemd[1]: Created slice kubepods-besteffort-pod66706539_cadd_4afd_97bf_d3e0a7ba29b5.slice - libcontainer container kubepods-besteffort-pod66706539_cadd_4afd_97bf_d3e0a7ba29b5.slice.
Aug 13 07:11:48.028111 kubelet[2615]: I0813 07:11:48.027824 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66706539-cadd-4afd-97bf-d3e0a7ba29b5-kube-proxy\") pod \"kube-proxy-gsjtg\" (UID: \"66706539-cadd-4afd-97bf-d3e0a7ba29b5\") " pod="kube-system/kube-proxy-gsjtg"
Aug 13 07:11:48.028111 kubelet[2615]: I0813 07:11:48.027906 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66706539-cadd-4afd-97bf-d3e0a7ba29b5-lib-modules\") pod \"kube-proxy-gsjtg\" (UID: \"66706539-cadd-4afd-97bf-d3e0a7ba29b5\") " pod="kube-system/kube-proxy-gsjtg"
Aug 13 07:11:48.028111 kubelet[2615]: I0813 07:11:48.027942 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66706539-cadd-4afd-97bf-d3e0a7ba29b5-xtables-lock\") pod \"kube-proxy-gsjtg\" (UID: \"66706539-cadd-4afd-97bf-d3e0a7ba29b5\") " pod="kube-system/kube-proxy-gsjtg"
Aug 13 07:11:48.028111 kubelet[2615]: I0813 07:11:48.027975 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zv5m\" (UniqueName: \"kubernetes.io/projected/66706539-cadd-4afd-97bf-d3e0a7ba29b5-kube-api-access-5zv5m\") pod \"kube-proxy-gsjtg\" (UID: \"66706539-cadd-4afd-97bf-d3e0a7ba29b5\") " pod="kube-system/kube-proxy-gsjtg"
Aug 13 07:11:48.069473 systemd[1]: Created slice kubepods-besteffort-pod8f0bd5f3_fd52_4094_b7f4_a8d1e1e14602.slice - libcontainer container kubepods-besteffort-pod8f0bd5f3_fd52_4094_b7f4_a8d1e1e14602.slice.
Aug 13 07:11:48.130201 kubelet[2615]: I0813 07:11:48.129148 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f0bd5f3-fd52-4094-b7f4-a8d1e1e14602-var-lib-calico\") pod \"tigera-operator-747864d56d-blrsh\" (UID: \"8f0bd5f3-fd52-4094-b7f4-a8d1e1e14602\") " pod="tigera-operator/tigera-operator-747864d56d-blrsh"
Aug 13 07:11:48.130201 kubelet[2615]: I0813 07:11:48.129268 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvnn\" (UniqueName: \"kubernetes.io/projected/8f0bd5f3-fd52-4094-b7f4-a8d1e1e14602-kube-api-access-8bvnn\") pod \"tigera-operator-747864d56d-blrsh\" (UID: \"8f0bd5f3-fd52-4094-b7f4-a8d1e1e14602\") " pod="tigera-operator/tigera-operator-747864d56d-blrsh"
Aug 13 07:11:48.248853 containerd[1466]: time="2025-08-13T07:11:48.248129778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gsjtg,Uid:66706539-cadd-4afd-97bf-d3e0a7ba29b5,Namespace:kube-system,Attempt:0,}"
Aug 13 07:11:48.284694 containerd[1466]: time="2025-08-13T07:11:48.284419572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:11:48.284694 containerd[1466]: time="2025-08-13T07:11:48.284505881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:11:48.284694 containerd[1466]: time="2025-08-13T07:11:48.284532317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:11:48.285145 containerd[1466]: time="2025-08-13T07:11:48.284886063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:11:48.318405 systemd[1]: Started cri-containerd-83031fa9badfae2e2095b65507224559714a3bc1a311d5dfee23b80509e3ff27.scope - libcontainer container 83031fa9badfae2e2095b65507224559714a3bc1a311d5dfee23b80509e3ff27.
Aug 13 07:11:48.351823 containerd[1466]: time="2025-08-13T07:11:48.351748802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gsjtg,Uid:66706539-cadd-4afd-97bf-d3e0a7ba29b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"83031fa9badfae2e2095b65507224559714a3bc1a311d5dfee23b80509e3ff27\""
Aug 13 07:11:48.360960 containerd[1466]: time="2025-08-13T07:11:48.360847324Z" level=info msg="CreateContainer within sandbox \"83031fa9badfae2e2095b65507224559714a3bc1a311d5dfee23b80509e3ff27\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 07:11:48.378101 containerd[1466]: time="2025-08-13T07:11:48.378041127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-blrsh,Uid:8f0bd5f3-fd52-4094-b7f4-a8d1e1e14602,Namespace:tigera-operator,Attempt:0,}"
Aug 13 07:11:48.386964 containerd[1466]: time="2025-08-13T07:11:48.386235198Z" level=info msg="CreateContainer within sandbox \"83031fa9badfae2e2095b65507224559714a3bc1a311d5dfee23b80509e3ff27\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4764b7ad892a62bf638c0ee974a5c51a3307b849d75e94473fd97ebf1531899\""
Aug 13 07:11:48.390320 containerd[1466]: time="2025-08-13T07:11:48.388437342Z" level=info msg="StartContainer for \"c4764b7ad892a62bf638c0ee974a5c51a3307b849d75e94473fd97ebf1531899\""
Aug 13 07:11:48.432444 systemd[1]: Started cri-containerd-c4764b7ad892a62bf638c0ee974a5c51a3307b849d75e94473fd97ebf1531899.scope - libcontainer container c4764b7ad892a62bf638c0ee974a5c51a3307b849d75e94473fd97ebf1531899.
Aug 13 07:11:48.620357 containerd[1466]: time="2025-08-13T07:11:48.619463405Z" level=info msg="StartContainer for \"c4764b7ad892a62bf638c0ee974a5c51a3307b849d75e94473fd97ebf1531899\" returns successfully"
Aug 13 07:11:48.741252 containerd[1466]: time="2025-08-13T07:11:48.740760659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:11:48.741252 containerd[1466]: time="2025-08-13T07:11:48.740937066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:11:48.741252 containerd[1466]: time="2025-08-13T07:11:48.740967153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:11:48.741252 containerd[1466]: time="2025-08-13T07:11:48.741143441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:11:48.765436 systemd[1]: Started cri-containerd-328d5e9bd07cc79f8ddda8592c0ffff77eac51a3456d143e8ca0e2911dcc2c5b.scope - libcontainer container 328d5e9bd07cc79f8ddda8592c0ffff77eac51a3456d143e8ca0e2911dcc2c5b.
Aug 13 07:11:48.837061 containerd[1466]: time="2025-08-13T07:11:48.836896659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-blrsh,Uid:8f0bd5f3-fd52-4094-b7f4-a8d1e1e14602,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"328d5e9bd07cc79f8ddda8592c0ffff77eac51a3456d143e8ca0e2911dcc2c5b\"" Aug 13 07:11:48.842083 containerd[1466]: time="2025-08-13T07:11:48.841679560Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:11:49.410534 kubelet[2615]: I0813 07:11:49.409956 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gsjtg" podStartSLOduration=2.409932499 podStartE2EDuration="2.409932499s" podCreationTimestamp="2025-08-13 07:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:11:49.409614647 +0000 UTC m=+7.400518267" watchObservedRunningTime="2025-08-13 07:11:49.409932499 +0000 UTC m=+7.400836117" Aug 13 07:11:50.183856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515489926.mount: Deactivated successfully. 
Aug 13 07:11:51.130854 containerd[1466]: time="2025-08-13T07:11:51.130779105Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:51.132291 containerd[1466]: time="2025-08-13T07:11:51.132217728Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:11:51.134025 containerd[1466]: time="2025-08-13T07:11:51.133955600Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:51.137312 containerd[1466]: time="2025-08-13T07:11:51.137217756Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:51.139078 containerd[1466]: time="2025-08-13T07:11:51.138455997Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.296718745s" Aug 13 07:11:51.139078 containerd[1466]: time="2025-08-13T07:11:51.138513518Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:11:51.144240 containerd[1466]: time="2025-08-13T07:11:51.144048484Z" level=info msg="CreateContainer within sandbox \"328d5e9bd07cc79f8ddda8592c0ffff77eac51a3456d143e8ca0e2911dcc2c5b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:11:51.170418 containerd[1466]: time="2025-08-13T07:11:51.170286459Z" level=info msg="CreateContainer within sandbox 
\"328d5e9bd07cc79f8ddda8592c0ffff77eac51a3456d143e8ca0e2911dcc2c5b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a2c6607a9e0f79044a500aacdc0d33c28bf055ff769f8ab4319eb7bac55429cc\"" Aug 13 07:11:51.175068 containerd[1466]: time="2025-08-13T07:11:51.175022553Z" level=info msg="StartContainer for \"a2c6607a9e0f79044a500aacdc0d33c28bf055ff769f8ab4319eb7bac55429cc\"" Aug 13 07:11:51.229481 systemd[1]: Started cri-containerd-a2c6607a9e0f79044a500aacdc0d33c28bf055ff769f8ab4319eb7bac55429cc.scope - libcontainer container a2c6607a9e0f79044a500aacdc0d33c28bf055ff769f8ab4319eb7bac55429cc. Aug 13 07:11:51.268791 containerd[1466]: time="2025-08-13T07:11:51.268572275Z" level=info msg="StartContainer for \"a2c6607a9e0f79044a500aacdc0d33c28bf055ff769f8ab4319eb7bac55429cc\" returns successfully" Aug 13 07:11:51.460354 kubelet[2615]: I0813 07:11:51.460268 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-blrsh" podStartSLOduration=2.159819433 podStartE2EDuration="4.460242323s" podCreationTimestamp="2025-08-13 07:11:47 +0000 UTC" firstStartedPulling="2025-08-13 07:11:48.839384663 +0000 UTC m=+6.830288284" lastFinishedPulling="2025-08-13 07:11:51.139807579 +0000 UTC m=+9.130711174" observedRunningTime="2025-08-13 07:11:51.441395845 +0000 UTC m=+9.432299464" watchObservedRunningTime="2025-08-13 07:11:51.460242323 +0000 UTC m=+9.451145941" Aug 13 07:11:56.686520 sudo[1735]: pam_unix(sudo:session): session closed for user root Aug 13 07:11:56.733676 sshd[1732]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:56.745293 systemd[1]: sshd@8-10.128.0.53:22-139.178.68.195:45056.service: Deactivated successfully. Aug 13 07:11:56.752433 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:11:56.752725 systemd[1]: session-9.scope: Consumed 8.116s CPU time, 160.7M memory peak, 0B memory swap peak. Aug 13 07:11:56.757272 systemd-logind[1439]: Session 9 logged out. 
Waiting for processes to exit. Aug 13 07:11:56.761542 systemd-logind[1439]: Removed session 9. Aug 13 07:12:03.157605 systemd[1]: Created slice kubepods-besteffort-pod9c2e353f_13c5_4558_8af9_6745f1bd0e4e.slice - libcontainer container kubepods-besteffort-pod9c2e353f_13c5_4558_8af9_6745f1bd0e4e.slice. Aug 13 07:12:03.237254 kubelet[2615]: I0813 07:12:03.237112 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4cfd\" (UniqueName: \"kubernetes.io/projected/9c2e353f-13c5-4558-8af9-6745f1bd0e4e-kube-api-access-r4cfd\") pod \"calico-typha-c57479c98-v2wn2\" (UID: \"9c2e353f-13c5-4558-8af9-6745f1bd0e4e\") " pod="calico-system/calico-typha-c57479c98-v2wn2" Aug 13 07:12:03.237913 kubelet[2615]: I0813 07:12:03.237291 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9c2e353f-13c5-4558-8af9-6745f1bd0e4e-typha-certs\") pod \"calico-typha-c57479c98-v2wn2\" (UID: \"9c2e353f-13c5-4558-8af9-6745f1bd0e4e\") " pod="calico-system/calico-typha-c57479c98-v2wn2" Aug 13 07:12:03.237913 kubelet[2615]: I0813 07:12:03.237333 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c2e353f-13c5-4558-8af9-6745f1bd0e4e-tigera-ca-bundle\") pod \"calico-typha-c57479c98-v2wn2\" (UID: \"9c2e353f-13c5-4558-8af9-6745f1bd0e4e\") " pod="calico-system/calico-typha-c57479c98-v2wn2" Aug 13 07:12:03.459041 systemd[1]: Created slice kubepods-besteffort-pod00e511a5_a59d_462a_86b9_d48b9887cc9f.slice - libcontainer container kubepods-besteffort-pod00e511a5_a59d_462a_86b9_d48b9887cc9f.slice. 
Aug 13 07:12:03.467165 containerd[1466]: time="2025-08-13T07:12:03.466708374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c57479c98-v2wn2,Uid:9c2e353f-13c5-4558-8af9-6745f1bd0e4e,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:03.524461 containerd[1466]: time="2025-08-13T07:12:03.523243310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:03.525504 containerd[1466]: time="2025-08-13T07:12:03.525146338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:03.525804 containerd[1466]: time="2025-08-13T07:12:03.525575228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:03.526456 containerd[1466]: time="2025-08-13T07:12:03.525963309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:03.540828 kubelet[2615]: I0813 07:12:03.539408 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/00e511a5-a59d-462a-86b9-d48b9887cc9f-node-certs\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.540828 kubelet[2615]: I0813 07:12:03.539463 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/00e511a5-a59d-462a-86b9-d48b9887cc9f-policysync\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.540828 kubelet[2615]: I0813 07:12:03.539503 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00e511a5-a59d-462a-86b9-d48b9887cc9f-var-lib-calico\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.540828 kubelet[2615]: I0813 07:12:03.539532 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00e511a5-a59d-462a-86b9-d48b9887cc9f-xtables-lock\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.540828 kubelet[2615]: I0813 07:12:03.539561 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00e511a5-a59d-462a-86b9-d48b9887cc9f-lib-modules\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.541288 
kubelet[2615]: I0813 07:12:03.539585 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/00e511a5-a59d-462a-86b9-d48b9887cc9f-var-run-calico\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.541288 kubelet[2615]: I0813 07:12:03.539618 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqqbh\" (UniqueName: \"kubernetes.io/projected/00e511a5-a59d-462a-86b9-d48b9887cc9f-kube-api-access-gqqbh\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.541288 kubelet[2615]: I0813 07:12:03.539651 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/00e511a5-a59d-462a-86b9-d48b9887cc9f-cni-net-dir\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.541288 kubelet[2615]: I0813 07:12:03.539676 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00e511a5-a59d-462a-86b9-d48b9887cc9f-tigera-ca-bundle\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.541288 kubelet[2615]: I0813 07:12:03.539703 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/00e511a5-a59d-462a-86b9-d48b9887cc9f-cni-bin-dir\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.541575 kubelet[2615]: I0813 07:12:03.539731 2615 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/00e511a5-a59d-462a-86b9-d48b9887cc9f-cni-log-dir\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.541575 kubelet[2615]: I0813 07:12:03.539763 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/00e511a5-a59d-462a-86b9-d48b9887cc9f-flexvol-driver-host\") pod \"calico-node-t4wpz\" (UID: \"00e511a5-a59d-462a-86b9-d48b9887cc9f\") " pod="calico-system/calico-node-t4wpz" Aug 13 07:12:03.581259 systemd[1]: Started cri-containerd-f4b5cac5f5489e775c54aa65e24d7a78011118dc3cd8aef783fe348344e06c79.scope - libcontainer container f4b5cac5f5489e775c54aa65e24d7a78011118dc3cd8aef783fe348344e06c79. Aug 13 07:12:03.649283 kubelet[2615]: E0813 07:12:03.648671 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.649283 kubelet[2615]: W0813 07:12:03.648724 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.649283 kubelet[2615]: E0813 07:12:03.648779 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.659400 kubelet[2615]: E0813 07:12:03.658028 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.659400 kubelet[2615]: W0813 07:12:03.658062 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.659400 kubelet[2615]: E0813 07:12:03.658112 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.689305 kubelet[2615]: E0813 07:12:03.689262 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.689305 kubelet[2615]: W0813 07:12:03.689298 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.689545 kubelet[2615]: E0813 07:12:03.689331 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.691622 containerd[1466]: time="2025-08-13T07:12:03.691567264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c57479c98-v2wn2,Uid:9c2e353f-13c5-4558-8af9-6745f1bd0e4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4b5cac5f5489e775c54aa65e24d7a78011118dc3cd8aef783fe348344e06c79\"" Aug 13 07:12:03.694662 containerd[1466]: time="2025-08-13T07:12:03.694622969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:12:03.725731 kubelet[2615]: E0813 07:12:03.725569 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phmj8" podUID="68f698fa-855d-4911-b071-8b5d910e948d" Aug 13 07:12:03.769536 containerd[1466]: time="2025-08-13T07:12:03.769460944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t4wpz,Uid:00e511a5-a59d-462a-86b9-d48b9887cc9f,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:03.813507 containerd[1466]: time="2025-08-13T07:12:03.812321589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:03.814007 containerd[1466]: time="2025-08-13T07:12:03.812828968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:03.814007 containerd[1466]: time="2025-08-13T07:12:03.812873854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:03.814007 containerd[1466]: time="2025-08-13T07:12:03.813032065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:03.819695 kubelet[2615]: E0813 07:12:03.819654 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.819695 kubelet[2615]: W0813 07:12:03.819695 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.819695 kubelet[2615]: E0813 07:12:03.819731 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.820510 kubelet[2615]: E0813 07:12:03.820106 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.820510 kubelet[2615]: W0813 07:12:03.820126 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.820510 kubelet[2615]: E0813 07:12:03.820148 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.823081 kubelet[2615]: E0813 07:12:03.821293 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.823081 kubelet[2615]: W0813 07:12:03.821311 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.823081 kubelet[2615]: E0813 07:12:03.821333 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.823081 kubelet[2615]: E0813 07:12:03.821761 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.823081 kubelet[2615]: W0813 07:12:03.821779 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.823081 kubelet[2615]: E0813 07:12:03.821800 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.823697 kubelet[2615]: E0813 07:12:03.823296 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.823697 kubelet[2615]: W0813 07:12:03.823314 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.823697 kubelet[2615]: E0813 07:12:03.823334 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.823697 kubelet[2615]: E0813 07:12:03.823686 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.823937 kubelet[2615]: W0813 07:12:03.823701 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.823937 kubelet[2615]: E0813 07:12:03.823719 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.825659 kubelet[2615]: E0813 07:12:03.825613 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.825659 kubelet[2615]: W0813 07:12:03.825636 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.825659 kubelet[2615]: E0813 07:12:03.825656 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.827449 kubelet[2615]: E0813 07:12:03.827412 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.827449 kubelet[2615]: W0813 07:12:03.827438 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.827449 kubelet[2615]: E0813 07:12:03.827461 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.827867 kubelet[2615]: E0813 07:12:03.827829 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.827867 kubelet[2615]: W0813 07:12:03.827853 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.828027 kubelet[2615]: E0813 07:12:03.827872 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.828315 kubelet[2615]: E0813 07:12:03.828235 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.828315 kubelet[2615]: W0813 07:12:03.828254 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.828315 kubelet[2615]: E0813 07:12:03.828271 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.829257 kubelet[2615]: E0813 07:12:03.828636 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.829257 kubelet[2615]: W0813 07:12:03.828650 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.829257 kubelet[2615]: E0813 07:12:03.828667 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.829887 kubelet[2615]: E0813 07:12:03.829862 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.829887 kubelet[2615]: W0813 07:12:03.829889 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.830026 kubelet[2615]: E0813 07:12:03.829907 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.830804 kubelet[2615]: E0813 07:12:03.830778 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.830804 kubelet[2615]: W0813 07:12:03.830801 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.831019 kubelet[2615]: E0813 07:12:03.830819 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.832790 kubelet[2615]: E0813 07:12:03.832763 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.832790 kubelet[2615]: W0813 07:12:03.832789 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.832962 kubelet[2615]: E0813 07:12:03.832807 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.835152 kubelet[2615]: E0813 07:12:03.835073 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.835152 kubelet[2615]: W0813 07:12:03.835096 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.835152 kubelet[2615]: E0813 07:12:03.835117 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.835767 kubelet[2615]: E0813 07:12:03.835617 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.835855 kubelet[2615]: W0813 07:12:03.835771 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.835855 kubelet[2615]: E0813 07:12:03.835793 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:03.836659 kubelet[2615]: E0813 07:12:03.836490 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.836659 kubelet[2615]: W0813 07:12:03.836511 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.836659 kubelet[2615]: E0813 07:12:03.836528 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:03.837458 kubelet[2615]: E0813 07:12:03.836999 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:03.837458 kubelet[2615]: W0813 07:12:03.837015 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:03.837458 kubelet[2615]: E0813 07:12:03.837059 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 07:12:03.838297 kubelet[2615]: E0813 07:12:03.837848 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.838297 kubelet[2615]: W0813 07:12:03.837868 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.838297 kubelet[2615]: E0813 07:12:03.837886 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.839569 kubelet[2615]: E0813 07:12:03.838846 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.839569 kubelet[2615]: W0813 07:12:03.838871 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.839569 kubelet[2615]: E0813 07:12:03.838889 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.848249 kubelet[2615]: E0813 07:12:03.848151 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.848503 kubelet[2615]: W0813 07:12:03.848465 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.848608 kubelet[2615]: E0813 07:12:03.848517 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.849601 kubelet[2615]: I0813 07:12:03.849561 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68f698fa-855d-4911-b071-8b5d910e948d-registration-dir\") pod \"csi-node-driver-phmj8\" (UID: \"68f698fa-855d-4911-b071-8b5d910e948d\") " pod="calico-system/csi-node-driver-phmj8"
Aug 13 07:12:03.851288 kubelet[2615]: E0813 07:12:03.851232 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.851702 kubelet[2615]: W0813 07:12:03.851476 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.851702 kubelet[2615]: E0813 07:12:03.851511 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.852677 kubelet[2615]: E0813 07:12:03.852426 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.852677 kubelet[2615]: W0813 07:12:03.852451 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.852677 kubelet[2615]: E0813 07:12:03.852472 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.852677 kubelet[2615]: I0813 07:12:03.852529 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68f698fa-855d-4911-b071-8b5d910e948d-socket-dir\") pod \"csi-node-driver-phmj8\" (UID: \"68f698fa-855d-4911-b071-8b5d910e948d\") " pod="calico-system/csi-node-driver-phmj8"
Aug 13 07:12:03.856287 kubelet[2615]: E0813 07:12:03.855246 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.856287 kubelet[2615]: W0813 07:12:03.855269 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.856287 kubelet[2615]: E0813 07:12:03.855291 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 13 07:12:03.857428 kubelet[2615]: E0813 07:12:03.857217 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.857428 kubelet[2615]: W0813 07:12:03.857236 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.857428 kubelet[2615]: E0813 07:12:03.857254 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.857428 kubelet[2615]: I0813 07:12:03.857296 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/68f698fa-855d-4911-b071-8b5d910e948d-varrun\") pod \"csi-node-driver-phmj8\" (UID: \"68f698fa-855d-4911-b071-8b5d910e948d\") " pod="calico-system/csi-node-driver-phmj8"
Aug 13 07:12:03.859078 kubelet[2615]: E0813 07:12:03.858446 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.859078 kubelet[2615]: W0813 07:12:03.858468 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.859078 kubelet[2615]: E0813 07:12:03.858486 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.859078 kubelet[2615]: I0813 07:12:03.858538 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfgqk\" (UniqueName: \"kubernetes.io/projected/68f698fa-855d-4911-b071-8b5d910e948d-kube-api-access-mfgqk\") pod \"csi-node-driver-phmj8\" (UID: \"68f698fa-855d-4911-b071-8b5d910e948d\") " pod="calico-system/csi-node-driver-phmj8"
Aug 13 07:12:03.859078 kubelet[2615]: E0813 07:12:03.858919 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.859078 kubelet[2615]: W0813 07:12:03.858949 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.859078 kubelet[2615]: E0813 07:12:03.858965 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.862577 kubelet[2615]: E0813 07:12:03.860299 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.862577 kubelet[2615]: W0813 07:12:03.860318 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.862577 kubelet[2615]: E0813 07:12:03.860336 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.862577 kubelet[2615]: E0813 07:12:03.862449 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.862577 kubelet[2615]: W0813 07:12:03.862464 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.862577 kubelet[2615]: E0813 07:12:03.862482 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.862577 kubelet[2615]: I0813 07:12:03.862536 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68f698fa-855d-4911-b071-8b5d910e948d-kubelet-dir\") pod \"csi-node-driver-phmj8\" (UID: \"68f698fa-855d-4911-b071-8b5d910e948d\") " pod="calico-system/csi-node-driver-phmj8"
Aug 13 07:12:03.864195 kubelet[2615]: E0813 07:12:03.863467 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.864195 kubelet[2615]: W0813 07:12:03.863490 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.864195 kubelet[2615]: E0813 07:12:03.863523 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 13 07:12:03.866773 kubelet[2615]: E0813 07:12:03.866682 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.866773 kubelet[2615]: W0813 07:12:03.866696 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.866773 kubelet[2615]: E0813 07:12:03.866711 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:03.877005 systemd[1]: Started cri-containerd-85a0279e53b88db00dbdddf49285656c6aaca69c695ad7c64cda4678d1ebc446.scope - libcontainer container 85a0279e53b88db00dbdddf49285656c6aaca69c695ad7c64cda4678d1ebc446.
Aug 13 07:12:03.939061 containerd[1466]: time="2025-08-13T07:12:03.938999032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t4wpz,Uid:00e511a5-a59d-462a-86b9-d48b9887cc9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"85a0279e53b88db00dbdddf49285656c6aaca69c695ad7c64cda4678d1ebc446\""
Aug 13 07:12:03.964706 kubelet[2615]: E0813 07:12:03.964458 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:03.964706 kubelet[2615]: W0813 07:12:03.964494 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:03.964706 kubelet[2615]: E0813 07:12:03.964543 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 13 07:12:04.041732 kubelet[2615]: E0813 07:12:04.041622 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:04.041732 kubelet[2615]: W0813 07:12:04.041654 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:04.041732 kubelet[2615]: E0813 07:12:04.041685 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:04.982218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574016585.mount: Deactivated successfully.
Aug 13 07:12:05.325164 kubelet[2615]: E0813 07:12:05.324018 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phmj8" podUID="68f698fa-855d-4911-b071-8b5d910e948d"
Aug 13 07:12:05.992867 containerd[1466]: time="2025-08-13T07:12:05.992785127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:05.994465 containerd[1466]: time="2025-08-13T07:12:05.994399631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 07:12:05.996046 containerd[1466]: time="2025-08-13T07:12:05.995979210Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:05.999032 containerd[1466]: time="2025-08-13T07:12:05.998988836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:06.000142 containerd[1466]: time="2025-08-13T07:12:05.999958254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.305283134s"
Aug 13 07:12:06.000142 containerd[1466]: time="2025-08-13T07:12:06.000008833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 07:12:06.003317 containerd[1466]: time="2025-08-13T07:12:06.002910422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 07:12:06.026257 containerd[1466]: time="2025-08-13T07:12:06.026197716Z" level=info msg="CreateContainer within sandbox \"f4b5cac5f5489e775c54aa65e24d7a78011118dc3cd8aef783fe348344e06c79\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 07:12:06.049966 containerd[1466]: time="2025-08-13T07:12:06.049909246Z" level=info msg="CreateContainer within sandbox \"f4b5cac5f5489e775c54aa65e24d7a78011118dc3cd8aef783fe348344e06c79\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"661d92d6cdb8b450d20eac9f5b32d76bb166c40fe7dfbaa0609a5e4a7a346a2f\""
Aug 13 07:12:06.050781 containerd[1466]: time="2025-08-13T07:12:06.050727032Z" level=info msg="StartContainer for \"661d92d6cdb8b450d20eac9f5b32d76bb166c40fe7dfbaa0609a5e4a7a346a2f\""
Aug 13 07:12:06.116487 systemd[1]: Started cri-containerd-661d92d6cdb8b450d20eac9f5b32d76bb166c40fe7dfbaa0609a5e4a7a346a2f.scope - libcontainer container 661d92d6cdb8b450d20eac9f5b32d76bb166c40fe7dfbaa0609a5e4a7a346a2f.
Aug 13 07:12:06.193942 containerd[1466]: time="2025-08-13T07:12:06.193643485Z" level=info msg="StartContainer for \"661d92d6cdb8b450d20eac9f5b32d76bb166c40fe7dfbaa0609a5e4a7a346a2f\" returns successfully"
Aug 13 07:12:06.559424 kubelet[2615]: E0813 07:12:06.559376 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:06.559424 kubelet[2615]: W0813 07:12:06.559416 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:06.560356 kubelet[2615]: E0813 07:12:06.559452 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:12:06.560356 kubelet[2615]: E0813 07:12:06.559892 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:12:06.560356 kubelet[2615]: W0813 07:12:06.559912 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:12:06.560356 kubelet[2615]: E0813 07:12:06.559934 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 07:12:06.560356 kubelet[2615]: E0813 07:12:06.560281 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.560356 kubelet[2615]: W0813 07:12:06.560297 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.560356 kubelet[2615]: E0813 07:12:06.560316 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.560959 kubelet[2615]: E0813 07:12:06.560715 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.560959 kubelet[2615]: W0813 07:12:06.560731 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.560959 kubelet[2615]: E0813 07:12:06.560750 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.563614 kubelet[2615]: E0813 07:12:06.563577 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.563614 kubelet[2615]: W0813 07:12:06.563604 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.563814 kubelet[2615]: E0813 07:12:06.563625 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.563992 kubelet[2615]: E0813 07:12:06.563968 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.563992 kubelet[2615]: W0813 07:12:06.563991 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.564148 kubelet[2615]: E0813 07:12:06.564010 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.564372 kubelet[2615]: E0813 07:12:06.564351 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.564372 kubelet[2615]: W0813 07:12:06.564371 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.564524 kubelet[2615]: E0813 07:12:06.564388 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.564747 kubelet[2615]: E0813 07:12:06.564724 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.564747 kubelet[2615]: W0813 07:12:06.564745 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.564890 kubelet[2615]: E0813 07:12:06.564763 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.565195 kubelet[2615]: E0813 07:12:06.565145 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.565195 kubelet[2615]: W0813 07:12:06.565181 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.565346 kubelet[2615]: E0813 07:12:06.565200 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.565560 kubelet[2615]: E0813 07:12:06.565534 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.565560 kubelet[2615]: W0813 07:12:06.565558 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.565721 kubelet[2615]: E0813 07:12:06.565575 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.566035 kubelet[2615]: E0813 07:12:06.566009 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.566035 kubelet[2615]: W0813 07:12:06.566033 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.566201 kubelet[2615]: E0813 07:12:06.566051 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.567318 kubelet[2615]: E0813 07:12:06.567290 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.567318 kubelet[2615]: W0813 07:12:06.567317 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.567470 kubelet[2615]: E0813 07:12:06.567335 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.568351 kubelet[2615]: E0813 07:12:06.568322 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.568351 kubelet[2615]: W0813 07:12:06.568347 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.568558 kubelet[2615]: E0813 07:12:06.568365 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.568739 kubelet[2615]: E0813 07:12:06.568715 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.568739 kubelet[2615]: W0813 07:12:06.568740 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.568869 kubelet[2615]: E0813 07:12:06.568757 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.570498 kubelet[2615]: E0813 07:12:06.570469 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.570601 kubelet[2615]: W0813 07:12:06.570494 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.570601 kubelet[2615]: E0813 07:12:06.570544 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.612409 kubelet[2615]: E0813 07:12:06.612207 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.612409 kubelet[2615]: W0813 07:12:06.612240 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.612409 kubelet[2615]: E0813 07:12:06.612298 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.613301 kubelet[2615]: E0813 07:12:06.613272 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.613301 kubelet[2615]: W0813 07:12:06.613300 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.613614 kubelet[2615]: E0813 07:12:06.613326 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.615512 kubelet[2615]: E0813 07:12:06.615482 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.615512 kubelet[2615]: W0813 07:12:06.615511 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.615738 kubelet[2615]: E0813 07:12:06.615534 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.616054 kubelet[2615]: E0813 07:12:06.616030 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.616054 kubelet[2615]: W0813 07:12:06.616053 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.616302 kubelet[2615]: E0813 07:12:06.616074 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.616523 kubelet[2615]: E0813 07:12:06.616502 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.616523 kubelet[2615]: W0813 07:12:06.616523 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.616666 kubelet[2615]: E0813 07:12:06.616543 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.616961 kubelet[2615]: E0813 07:12:06.616936 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.617039 kubelet[2615]: W0813 07:12:06.616962 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.617039 kubelet[2615]: E0813 07:12:06.616981 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.617474 kubelet[2615]: E0813 07:12:06.617450 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.617552 kubelet[2615]: W0813 07:12:06.617471 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.617552 kubelet[2615]: E0813 07:12:06.617498 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.618583 kubelet[2615]: E0813 07:12:06.618556 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.618583 kubelet[2615]: W0813 07:12:06.618580 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.618745 kubelet[2615]: E0813 07:12:06.618598 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.619366 kubelet[2615]: E0813 07:12:06.619328 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.619366 kubelet[2615]: W0813 07:12:06.619351 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.619366 kubelet[2615]: E0813 07:12:06.619369 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.621909 kubelet[2615]: E0813 07:12:06.621877 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.621909 kubelet[2615]: W0813 07:12:06.621905 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.622111 kubelet[2615]: E0813 07:12:06.621928 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.623389 kubelet[2615]: E0813 07:12:06.623361 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.623389 kubelet[2615]: W0813 07:12:06.623385 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.623543 kubelet[2615]: E0813 07:12:06.623405 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.623869 kubelet[2615]: E0813 07:12:06.623826 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.623869 kubelet[2615]: W0813 07:12:06.623842 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.623869 kubelet[2615]: E0813 07:12:06.623859 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.626308 kubelet[2615]: E0813 07:12:06.626282 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.626308 kubelet[2615]: W0813 07:12:06.626307 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.626717 kubelet[2615]: E0813 07:12:06.626327 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.627157 kubelet[2615]: E0813 07:12:06.627130 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.627157 kubelet[2615]: W0813 07:12:06.627154 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.627519 kubelet[2615]: E0813 07:12:06.627187 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.627591 kubelet[2615]: E0813 07:12:06.627551 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.627591 kubelet[2615]: W0813 07:12:06.627565 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.627591 kubelet[2615]: E0813 07:12:06.627584 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.629773 kubelet[2615]: E0813 07:12:06.629511 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.629773 kubelet[2615]: W0813 07:12:06.629531 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.629773 kubelet[2615]: E0813 07:12:06.629549 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:06.630267 kubelet[2615]: E0813 07:12:06.630243 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.630267 kubelet[2615]: W0813 07:12:06.630266 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.630390 kubelet[2615]: E0813 07:12:06.630283 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:06.632352 kubelet[2615]: E0813 07:12:06.632319 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:06.632352 kubelet[2615]: W0813 07:12:06.632343 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:06.632509 kubelet[2615]: E0813 07:12:06.632361 2615 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:07.158041 containerd[1466]: time="2025-08-13T07:12:07.157969332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:07.159722 containerd[1466]: time="2025-08-13T07:12:07.159615777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:12:07.161198 containerd[1466]: time="2025-08-13T07:12:07.161079794Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:07.164071 containerd[1466]: time="2025-08-13T07:12:07.163989224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:07.165119 containerd[1466]: time="2025-08-13T07:12:07.164916457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.161960288s" Aug 13 07:12:07.165119 containerd[1466]: time="2025-08-13T07:12:07.164967788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:12:07.171646 containerd[1466]: time="2025-08-13T07:12:07.171583195Z" level=info msg="CreateContainer within sandbox \"85a0279e53b88db00dbdddf49285656c6aaca69c695ad7c64cda4678d1ebc446\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:12:07.192813 containerd[1466]: time="2025-08-13T07:12:07.192734828Z" level=info msg="CreateContainer within sandbox \"85a0279e53b88db00dbdddf49285656c6aaca69c695ad7c64cda4678d1ebc446\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6113acf689f10ab815d0ad4b31a9d7705a1d87a55ceda1f53f9e60a003b6970b\"" Aug 13 07:12:07.195220 containerd[1466]: time="2025-08-13T07:12:07.193561594Z" level=info msg="StartContainer for \"6113acf689f10ab815d0ad4b31a9d7705a1d87a55ceda1f53f9e60a003b6970b\"" Aug 13 07:12:07.241458 systemd[1]: Started cri-containerd-6113acf689f10ab815d0ad4b31a9d7705a1d87a55ceda1f53f9e60a003b6970b.scope - libcontainer container 6113acf689f10ab815d0ad4b31a9d7705a1d87a55ceda1f53f9e60a003b6970b. Aug 13 07:12:07.283314 containerd[1466]: time="2025-08-13T07:12:07.282136983Z" level=info msg="StartContainer for \"6113acf689f10ab815d0ad4b31a9d7705a1d87a55ceda1f53f9e60a003b6970b\" returns successfully" Aug 13 07:12:07.300408 systemd[1]: cri-containerd-6113acf689f10ab815d0ad4b31a9d7705a1d87a55ceda1f53f9e60a003b6970b.scope: Deactivated successfully. 
Aug 13 07:12:07.324463 kubelet[2615]: E0813 07:12:07.323966 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phmj8" podUID="68f698fa-855d-4911-b071-8b5d910e948d" Aug 13 07:12:07.342679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6113acf689f10ab815d0ad4b31a9d7705a1d87a55ceda1f53f9e60a003b6970b-rootfs.mount: Deactivated successfully. Aug 13 07:12:07.483146 kubelet[2615]: I0813 07:12:07.482773 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:12:07.505493 kubelet[2615]: I0813 07:12:07.505294 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c57479c98-v2wn2" podStartSLOduration=2.197774966 podStartE2EDuration="4.505264483s" podCreationTimestamp="2025-08-13 07:12:03 +0000 UTC" firstStartedPulling="2025-08-13 07:12:03.694023142 +0000 UTC m=+21.684926737" lastFinishedPulling="2025-08-13 07:12:06.00151264 +0000 UTC m=+23.992416254" observedRunningTime="2025-08-13 07:12:06.502783469 +0000 UTC m=+24.493687089" watchObservedRunningTime="2025-08-13 07:12:07.505264483 +0000 UTC m=+25.496168100" Aug 13 07:12:08.016376 containerd[1466]: time="2025-08-13T07:12:08.016289888Z" level=info msg="shim disconnected" id=6113acf689f10ab815d0ad4b31a9d7705a1d87a55ceda1f53f9e60a003b6970b namespace=k8s.io Aug 13 07:12:08.016674 containerd[1466]: time="2025-08-13T07:12:08.016436950Z" level=warning msg="cleaning up after shim disconnected" id=6113acf689f10ab815d0ad4b31a9d7705a1d87a55ceda1f53f9e60a003b6970b namespace=k8s.io Aug 13 07:12:08.016674 containerd[1466]: time="2025-08-13T07:12:08.016457725Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:12:08.490323 containerd[1466]: time="2025-08-13T07:12:08.489543669Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:12:09.323349 kubelet[2615]: E0813 07:12:09.323258 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phmj8" podUID="68f698fa-855d-4911-b071-8b5d910e948d" Aug 13 07:12:11.322967 kubelet[2615]: E0813 07:12:11.322894 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phmj8" podUID="68f698fa-855d-4911-b071-8b5d910e948d" Aug 13 07:12:11.915949 containerd[1466]: time="2025-08-13T07:12:11.915877637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:11.917286 containerd[1466]: time="2025-08-13T07:12:11.917206154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:12:11.918761 containerd[1466]: time="2025-08-13T07:12:11.918688357Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:11.921927 containerd[1466]: time="2025-08-13T07:12:11.921880544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:11.923224 containerd[1466]: time="2025-08-13T07:12:11.923017067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.433408721s" Aug 13 07:12:11.923224 containerd[1466]: time="2025-08-13T07:12:11.923065234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:12:11.929480 containerd[1466]: time="2025-08-13T07:12:11.929427540Z" level=info msg="CreateContainer within sandbox \"85a0279e53b88db00dbdddf49285656c6aaca69c695ad7c64cda4678d1ebc446\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:12:11.954791 containerd[1466]: time="2025-08-13T07:12:11.954732845Z" level=info msg="CreateContainer within sandbox \"85a0279e53b88db00dbdddf49285656c6aaca69c695ad7c64cda4678d1ebc446\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18\"" Aug 13 07:12:11.955607 containerd[1466]: time="2025-08-13T07:12:11.955564391Z" level=info msg="StartContainer for \"2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18\"" Aug 13 07:12:12.008285 systemd[1]: run-containerd-runc-k8s.io-2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18-runc.25JrCq.mount: Deactivated successfully. Aug 13 07:12:12.015730 systemd[1]: Started cri-containerd-2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18.scope - libcontainer container 2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18. 
Aug 13 07:12:12.062797 containerd[1466]: time="2025-08-13T07:12:12.062733917Z" level=info msg="StartContainer for \"2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18\" returns successfully" Aug 13 07:12:12.332779 kubelet[2615]: I0813 07:12:12.331899 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:12:13.097728 systemd[1]: cri-containerd-2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18.scope: Deactivated successfully. Aug 13 07:12:13.133326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18-rootfs.mount: Deactivated successfully. Aug 13 07:12:13.166083 kubelet[2615]: I0813 07:12:13.164954 2615 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:12:13.471627 kubelet[2615]: I0813 07:12:13.471461 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjml\" (UniqueName: \"kubernetes.io/projected/9dd82a74-cb79-405c-9c40-0ccdbc701a0f-kube-api-access-2rjml\") pod \"calico-kube-controllers-d489544c6-rtpdd\" (UID: \"9dd82a74-cb79-405c-9c40-0ccdbc701a0f\") " pod="calico-system/calico-kube-controllers-d489544c6-rtpdd" Aug 13 07:12:13.597026 kubelet[2615]: I0813 07:12:13.471669 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9dd82a74-cb79-405c-9c40-0ccdbc701a0f-tigera-ca-bundle\") pod \"calico-kube-controllers-d489544c6-rtpdd\" (UID: \"9dd82a74-cb79-405c-9c40-0ccdbc701a0f\") " pod="calico-system/calico-kube-controllers-d489544c6-rtpdd" Aug 13 07:12:13.639325 systemd[1]: Created slice kubepods-besteffort-pod9dd82a74_cb79_405c_9c40_0ccdbc701a0f.slice - libcontainer container kubepods-besteffort-pod9dd82a74_cb79_405c_9c40_0ccdbc701a0f.slice. 
Aug 13 07:12:13.645163 containerd[1466]: time="2025-08-13T07:12:13.644617354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d489544c6-rtpdd,Uid:9dd82a74-cb79-405c-9c40-0ccdbc701a0f,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:13.653606 systemd[1]: Created slice kubepods-besteffort-pod68f698fa_855d_4911_b071_8b5d910e948d.slice - libcontainer container kubepods-besteffort-pod68f698fa_855d_4911_b071_8b5d910e948d.slice. Aug 13 07:12:13.658009 containerd[1466]: time="2025-08-13T07:12:13.657780527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phmj8,Uid:68f698fa-855d-4911-b071-8b5d910e948d,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:13.659942 systemd[1]: Created slice kubepods-burstable-pod824f485a_87b0_420d_9975_44d490a376b1.slice - libcontainer container kubepods-burstable-pod824f485a_87b0_420d_9975_44d490a376b1.slice. Aug 13 07:12:13.673946 kubelet[2615]: I0813 07:12:13.673493 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/824f485a-87b0-420d-9975-44d490a376b1-config-volume\") pod \"coredns-674b8bbfcf-cmvkg\" (UID: \"824f485a-87b0-420d-9975-44d490a376b1\") " pod="kube-system/coredns-674b8bbfcf-cmvkg" Aug 13 07:12:13.673946 kubelet[2615]: I0813 07:12:13.673564 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmx6\" (UniqueName: \"kubernetes.io/projected/824f485a-87b0-420d-9975-44d490a376b1-kube-api-access-bnmx6\") pod \"coredns-674b8bbfcf-cmvkg\" (UID: \"824f485a-87b0-420d-9975-44d490a376b1\") " pod="kube-system/coredns-674b8bbfcf-cmvkg" Aug 13 07:12:13.694820 containerd[1466]: time="2025-08-13T07:12:13.694412192Z" level=info msg="shim disconnected" id=2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18 namespace=k8s.io Aug 13 07:12:13.695433 containerd[1466]: time="2025-08-13T07:12:13.695046572Z" 
level=warning msg="cleaning up after shim disconnected" id=2413c93e908fe9fdc490526e08e6de3dd1bdf89e50f8f116cbf42faa92a18b18 namespace=k8s.io Aug 13 07:12:13.695791 containerd[1466]: time="2025-08-13T07:12:13.695350617Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:12:13.771825 systemd[1]: Created slice kubepods-burstable-pod89495aa1_ee61_41c8_8f49_33992f3f9e26.slice - libcontainer container kubepods-burstable-pod89495aa1_ee61_41c8_8f49_33992f3f9e26.slice. Aug 13 07:12:13.786342 kubelet[2615]: I0813 07:12:13.786296 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89495aa1-ee61-41c8-8f49-33992f3f9e26-config-volume\") pod \"coredns-674b8bbfcf-82cfm\" (UID: \"89495aa1-ee61-41c8-8f49-33992f3f9e26\") " pod="kube-system/coredns-674b8bbfcf-82cfm" Aug 13 07:12:13.786342 kubelet[2615]: I0813 07:12:13.786355 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb2b53d-86e2-4e78-ad45-c6b1da7fe653-config\") pod \"goldmane-768f4c5c69-5mqk4\" (UID: \"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653\") " pod="calico-system/goldmane-768f4c5c69-5mqk4" Aug 13 07:12:13.786601 kubelet[2615]: I0813 07:12:13.786400 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkwd6\" (UniqueName: \"kubernetes.io/projected/89495aa1-ee61-41c8-8f49-33992f3f9e26-kube-api-access-bkwd6\") pod \"coredns-674b8bbfcf-82cfm\" (UID: \"89495aa1-ee61-41c8-8f49-33992f3f9e26\") " pod="kube-system/coredns-674b8bbfcf-82cfm" Aug 13 07:12:13.786601 kubelet[2615]: I0813 07:12:13.786427 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1cb2b53d-86e2-4e78-ad45-c6b1da7fe653-goldmane-key-pair\") pod \"goldmane-768f4c5c69-5mqk4\" (UID: 
\"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653\") " pod="calico-system/goldmane-768f4c5c69-5mqk4" Aug 13 07:12:13.786601 kubelet[2615]: I0813 07:12:13.786450 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gf9m\" (UniqueName: \"kubernetes.io/projected/1cb2b53d-86e2-4e78-ad45-c6b1da7fe653-kube-api-access-8gf9m\") pod \"goldmane-768f4c5c69-5mqk4\" (UID: \"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653\") " pod="calico-system/goldmane-768f4c5c69-5mqk4" Aug 13 07:12:13.786601 kubelet[2615]: I0813 07:12:13.786477 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/119aced9-7cb0-4d80-8364-7168248c339c-calico-apiserver-certs\") pod \"calico-apiserver-7549bfdd56-dfv7p\" (UID: \"119aced9-7cb0-4d80-8364-7168248c339c\") " pod="calico-apiserver/calico-apiserver-7549bfdd56-dfv7p" Aug 13 07:12:13.786601 kubelet[2615]: I0813 07:12:13.786541 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnwqs\" (UniqueName: \"kubernetes.io/projected/714ca8a9-a538-482c-a68a-c0ed3f848627-kube-api-access-tnwqs\") pod \"calico-apiserver-7549bfdd56-x7dn2\" (UID: \"714ca8a9-a538-482c-a68a-c0ed3f848627\") " pod="calico-apiserver/calico-apiserver-7549bfdd56-x7dn2" Aug 13 07:12:13.786856 kubelet[2615]: I0813 07:12:13.786572 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxsv\" (UniqueName: \"kubernetes.io/projected/119aced9-7cb0-4d80-8364-7168248c339c-kube-api-access-mrxsv\") pod \"calico-apiserver-7549bfdd56-dfv7p\" (UID: \"119aced9-7cb0-4d80-8364-7168248c339c\") " pod="calico-apiserver/calico-apiserver-7549bfdd56-dfv7p" Aug 13 07:12:13.786856 kubelet[2615]: I0813 07:12:13.786603 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/1cb2b53d-86e2-4e78-ad45-c6b1da7fe653-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-5mqk4\" (UID: \"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653\") " pod="calico-system/goldmane-768f4c5c69-5mqk4" Aug 13 07:12:13.786856 kubelet[2615]: I0813 07:12:13.786636 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/714ca8a9-a538-482c-a68a-c0ed3f848627-calico-apiserver-certs\") pod \"calico-apiserver-7549bfdd56-x7dn2\" (UID: \"714ca8a9-a538-482c-a68a-c0ed3f848627\") " pod="calico-apiserver/calico-apiserver-7549bfdd56-x7dn2" Aug 13 07:12:13.811819 systemd[1]: Created slice kubepods-besteffort-pod714ca8a9_a538_482c_a68a_c0ed3f848627.slice - libcontainer container kubepods-besteffort-pod714ca8a9_a538_482c_a68a_c0ed3f848627.slice. Aug 13 07:12:13.861937 systemd[1]: Created slice kubepods-besteffort-pod119aced9_7cb0_4d80_8364_7168248c339c.slice - libcontainer container kubepods-besteffort-pod119aced9_7cb0_4d80_8364_7168248c339c.slice. Aug 13 07:12:13.878994 systemd[1]: Created slice kubepods-besteffort-pod1cb2b53d_86e2_4e78_ad45_c6b1da7fe653.slice - libcontainer container kubepods-besteffort-pod1cb2b53d_86e2_4e78_ad45_c6b1da7fe653.slice. 
Aug 13 07:12:13.897197 kubelet[2615]: I0813 07:12:13.896744 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b439bd5c-19e5-49c8-9c88-5a4a70429a74-whisker-backend-key-pair\") pod \"whisker-79f55c7ffd-cs8ql\" (UID: \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\") " pod="calico-system/whisker-79f55c7ffd-cs8ql" Aug 13 07:12:13.897197 kubelet[2615]: I0813 07:12:13.896899 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b439bd5c-19e5-49c8-9c88-5a4a70429a74-whisker-ca-bundle\") pod \"whisker-79f55c7ffd-cs8ql\" (UID: \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\") " pod="calico-system/whisker-79f55c7ffd-cs8ql" Aug 13 07:12:13.898414 kubelet[2615]: I0813 07:12:13.898241 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnq2d\" (UniqueName: \"kubernetes.io/projected/b439bd5c-19e5-49c8-9c88-5a4a70429a74-kube-api-access-cnq2d\") pod \"whisker-79f55c7ffd-cs8ql\" (UID: \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\") " pod="calico-system/whisker-79f55c7ffd-cs8ql" Aug 13 07:12:13.923618 systemd[1]: Created slice kubepods-besteffort-podb439bd5c_19e5_49c8_9c88_5a4a70429a74.slice - libcontainer container kubepods-besteffort-podb439bd5c_19e5_49c8_9c88_5a4a70429a74.slice. 
Aug 13 07:12:13.970537 containerd[1466]: time="2025-08-13T07:12:13.970227524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cmvkg,Uid:824f485a-87b0-420d-9975-44d490a376b1,Namespace:kube-system,Attempt:0,}" Aug 13 07:12:14.057325 containerd[1466]: time="2025-08-13T07:12:14.055884332Z" level=error msg="Failed to destroy network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.057735 containerd[1466]: time="2025-08-13T07:12:14.057528363Z" level=error msg="encountered an error cleaning up failed sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.058188 containerd[1466]: time="2025-08-13T07:12:14.057848504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d489544c6-rtpdd,Uid:9dd82a74-cb79-405c-9c40-0ccdbc701a0f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.059232 kubelet[2615]: E0813 07:12:14.058621 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.059232 kubelet[2615]: E0813 07:12:14.058713 2615 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d489544c6-rtpdd" Aug 13 07:12:14.059232 kubelet[2615]: E0813 07:12:14.058749 2615 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d489544c6-rtpdd" Aug 13 07:12:14.059482 kubelet[2615]: E0813 07:12:14.058835 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d489544c6-rtpdd_calico-system(9dd82a74-cb79-405c-9c40-0ccdbc701a0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d489544c6-rtpdd_calico-system(9dd82a74-cb79-405c-9c40-0ccdbc701a0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d489544c6-rtpdd" podUID="9dd82a74-cb79-405c-9c40-0ccdbc701a0f" Aug 13 07:12:14.063197 containerd[1466]: 
time="2025-08-13T07:12:14.062807662Z" level=error msg="Failed to destroy network for sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.063660 containerd[1466]: time="2025-08-13T07:12:14.063614344Z" level=error msg="encountered an error cleaning up failed sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.063796 containerd[1466]: time="2025-08-13T07:12:14.063730236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phmj8,Uid:68f698fa-855d-4911-b071-8b5d910e948d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.064249 kubelet[2615]: E0813 07:12:14.064193 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.064917 kubelet[2615]: E0813 07:12:14.064282 2615 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-phmj8" Aug 13 07:12:14.064917 kubelet[2615]: E0813 07:12:14.064316 2615 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-phmj8" Aug 13 07:12:14.064917 kubelet[2615]: E0813 07:12:14.064395 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-phmj8_calico-system(68f698fa-855d-4911-b071-8b5d910e948d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-phmj8_calico-system(68f698fa-855d-4911-b071-8b5d910e948d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-phmj8" podUID="68f698fa-855d-4911-b071-8b5d910e948d" Aug 13 07:12:14.108924 containerd[1466]: time="2025-08-13T07:12:14.108855288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-82cfm,Uid:89495aa1-ee61-41c8-8f49-33992f3f9e26,Namespace:kube-system,Attempt:0,}" Aug 13 07:12:14.126937 containerd[1466]: time="2025-08-13T07:12:14.126884556Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7549bfdd56-x7dn2,Uid:714ca8a9-a538-482c-a68a-c0ed3f848627,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:12:14.142712 containerd[1466]: time="2025-08-13T07:12:14.142650307Z" level=error msg="Failed to destroy network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.144915 containerd[1466]: time="2025-08-13T07:12:14.144858592Z" level=error msg="encountered an error cleaning up failed sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.145057 containerd[1466]: time="2025-08-13T07:12:14.144957472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cmvkg,Uid:824f485a-87b0-420d-9975-44d490a376b1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.148263 kubelet[2615]: E0813 07:12:14.147491 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.148263 kubelet[2615]: 
E0813 07:12:14.147625 2615 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cmvkg" Aug 13 07:12:14.148263 kubelet[2615]: E0813 07:12:14.147664 2615 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cmvkg" Aug 13 07:12:14.148516 kubelet[2615]: E0813 07:12:14.147750 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cmvkg_kube-system(824f485a-87b0-420d-9975-44d490a376b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cmvkg_kube-system(824f485a-87b0-420d-9975-44d490a376b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cmvkg" podUID="824f485a-87b0-420d-9975-44d490a376b1" Aug 13 07:12:14.174755 containerd[1466]: time="2025-08-13T07:12:14.174709259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7549bfdd56-dfv7p,Uid:119aced9-7cb0-4d80-8364-7168248c339c,Namespace:calico-apiserver,Attempt:0,}" Aug 
13 07:12:14.187788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726-shm.mount: Deactivated successfully. Aug 13 07:12:14.187965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9-shm.mount: Deactivated successfully. Aug 13 07:12:14.211848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254-shm.mount: Deactivated successfully. Aug 13 07:12:14.220596 containerd[1466]: time="2025-08-13T07:12:14.220540064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5mqk4,Uid:1cb2b53d-86e2-4e78-ad45-c6b1da7fe653,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:14.324124 containerd[1466]: time="2025-08-13T07:12:14.323979943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79f55c7ffd-cs8ql,Uid:b439bd5c-19e5-49c8-9c88-5a4a70429a74,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:14.442203 containerd[1466]: time="2025-08-13T07:12:14.441681704Z" level=error msg="Failed to destroy network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.442367 containerd[1466]: time="2025-08-13T07:12:14.442213838Z" level=error msg="encountered an error cleaning up failed sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.442367 containerd[1466]: time="2025-08-13T07:12:14.442303875Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-82cfm,Uid:89495aa1-ee61-41c8-8f49-33992f3f9e26,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.442735 kubelet[2615]: E0813 07:12:14.442685 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.443133 kubelet[2615]: E0813 07:12:14.443041 2615 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-82cfm" Aug 13 07:12:14.445385 kubelet[2615]: E0813 07:12:14.443089 2615 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-82cfm" Aug 13 07:12:14.445385 kubelet[2615]: E0813 07:12:14.443986 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-82cfm_kube-system(89495aa1-ee61-41c8-8f49-33992f3f9e26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-82cfm_kube-system(89495aa1-ee61-41c8-8f49-33992f3f9e26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-82cfm" podUID="89495aa1-ee61-41c8-8f49-33992f3f9e26" Aug 13 07:12:14.498203 containerd[1466]: time="2025-08-13T07:12:14.498112454Z" level=error msg="Failed to destroy network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.499748 containerd[1466]: time="2025-08-13T07:12:14.498825648Z" level=error msg="encountered an error cleaning up failed sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.499748 containerd[1466]: time="2025-08-13T07:12:14.498905331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7549bfdd56-dfv7p,Uid:119aced9-7cb0-4d80-8364-7168248c339c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.500003 kubelet[2615]: E0813 07:12:14.499223 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.500003 kubelet[2615]: E0813 07:12:14.499305 2615 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7549bfdd56-dfv7p" Aug 13 07:12:14.500003 kubelet[2615]: E0813 07:12:14.499341 2615 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7549bfdd56-dfv7p" Aug 13 07:12:14.500622 kubelet[2615]: E0813 07:12:14.499417 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7549bfdd56-dfv7p_calico-apiserver(119aced9-7cb0-4d80-8364-7168248c339c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7549bfdd56-dfv7p_calico-apiserver(119aced9-7cb0-4d80-8364-7168248c339c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7549bfdd56-dfv7p" podUID="119aced9-7cb0-4d80-8364-7168248c339c" Aug 13 07:12:14.518284 containerd[1466]: time="2025-08-13T07:12:14.516851385Z" level=error msg="Failed to destroy network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.521593 kubelet[2615]: I0813 07:12:14.520365 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:14.523371 containerd[1466]: time="2025-08-13T07:12:14.522483857Z" level=info msg="StopPodSandbox for \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\"" Aug 13 07:12:14.523371 containerd[1466]: time="2025-08-13T07:12:14.522744995Z" level=info msg="Ensure that sandbox 5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1 in task-service has been cleanup successfully" Aug 13 07:12:14.526904 kubelet[2615]: I0813 07:12:14.526684 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:14.528617 containerd[1466]: time="2025-08-13T07:12:14.528571402Z" level=error msg="encountered an error cleaning up failed sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Aug 13 07:12:14.532252 containerd[1466]: time="2025-08-13T07:12:14.532096143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7549bfdd56-x7dn2,Uid:714ca8a9-a538-482c-a68a-c0ed3f848627,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.533063 containerd[1466]: time="2025-08-13T07:12:14.530433309Z" level=info msg="StopPodSandbox for \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\"" Aug 13 07:12:14.534364 containerd[1466]: time="2025-08-13T07:12:14.533658557Z" level=info msg="Ensure that sandbox 769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8 in task-service has been cleanup successfully" Aug 13 07:12:14.534530 kubelet[2615]: E0813 07:12:14.534066 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.534530 kubelet[2615]: E0813 07:12:14.534129 2615 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7549bfdd56-x7dn2" Aug 13 07:12:14.534530 kubelet[2615]: 
E0813 07:12:14.534165 2615 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7549bfdd56-x7dn2" Aug 13 07:12:14.534727 kubelet[2615]: E0813 07:12:14.534260 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7549bfdd56-x7dn2_calico-apiserver(714ca8a9-a538-482c-a68a-c0ed3f848627)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7549bfdd56-x7dn2_calico-apiserver(714ca8a9-a538-482c-a68a-c0ed3f848627)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7549bfdd56-x7dn2" podUID="714ca8a9-a538-482c-a68a-c0ed3f848627" Aug 13 07:12:14.539389 kubelet[2615]: I0813 07:12:14.539360 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:14.540655 containerd[1466]: time="2025-08-13T07:12:14.540602013Z" level=info msg="StopPodSandbox for \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\"" Aug 13 07:12:14.544486 containerd[1466]: time="2025-08-13T07:12:14.544099432Z" level=info msg="Ensure that sandbox 3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726 in task-service has been cleanup successfully" Aug 13 07:12:14.560965 containerd[1466]: 
time="2025-08-13T07:12:14.560880889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:12:14.567959 kubelet[2615]: I0813 07:12:14.566156 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:14.571689 containerd[1466]: time="2025-08-13T07:12:14.571545333Z" level=error msg="Failed to destroy network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.574553 containerd[1466]: time="2025-08-13T07:12:14.572270294Z" level=info msg="StopPodSandbox for \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\"" Aug 13 07:12:14.579392 containerd[1466]: time="2025-08-13T07:12:14.578865952Z" level=info msg="Ensure that sandbox 23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254 in task-service has been cleanup successfully" Aug 13 07:12:14.585269 containerd[1466]: time="2025-08-13T07:12:14.577895542Z" level=error msg="encountered an error cleaning up failed sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.585269 containerd[1466]: time="2025-08-13T07:12:14.583828615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5mqk4,Uid:1cb2b53d-86e2-4e78-ad45-c6b1da7fe653,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.585522 kubelet[2615]: E0813 07:12:14.585339 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.585522 kubelet[2615]: E0813 07:12:14.585398 2615 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-5mqk4" Aug 13 07:12:14.585522 kubelet[2615]: E0813 07:12:14.585441 2615 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-5mqk4" Aug 13 07:12:14.587061 kubelet[2615]: E0813 07:12:14.586271 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-5mqk4_calico-system(1cb2b53d-86e2-4e78-ad45-c6b1da7fe653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-5mqk4_calico-system(1cb2b53d-86e2-4e78-ad45-c6b1da7fe653)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-5mqk4" podUID="1cb2b53d-86e2-4e78-ad45-c6b1da7fe653" Aug 13 07:12:14.591562 kubelet[2615]: I0813 07:12:14.591410 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:14.596596 containerd[1466]: time="2025-08-13T07:12:14.593747881Z" level=info msg="StopPodSandbox for \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\"" Aug 13 07:12:14.603839 containerd[1466]: time="2025-08-13T07:12:14.603430008Z" level=info msg="Ensure that sandbox 598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9 in task-service has been cleanup successfully" Aug 13 07:12:14.648602 containerd[1466]: time="2025-08-13T07:12:14.648539282Z" level=error msg="Failed to destroy network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.650047 containerd[1466]: time="2025-08-13T07:12:14.649652394Z" level=error msg="encountered an error cleaning up failed sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.650047 containerd[1466]: time="2025-08-13T07:12:14.649733103Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-79f55c7ffd-cs8ql,Uid:b439bd5c-19e5-49c8-9c88-5a4a70429a74,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.650492 kubelet[2615]: E0813 07:12:14.650025 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.650492 kubelet[2615]: E0813 07:12:14.650104 2615 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79f55c7ffd-cs8ql" Aug 13 07:12:14.650492 kubelet[2615]: E0813 07:12:14.650140 2615 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79f55c7ffd-cs8ql" Aug 13 07:12:14.651365 kubelet[2615]: E0813 07:12:14.651318 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-79f55c7ffd-cs8ql_calico-system(b439bd5c-19e5-49c8-9c88-5a4a70429a74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79f55c7ffd-cs8ql_calico-system(b439bd5c-19e5-49c8-9c88-5a4a70429a74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79f55c7ffd-cs8ql" podUID="b439bd5c-19e5-49c8-9c88-5a4a70429a74" Aug 13 07:12:14.694487 containerd[1466]: time="2025-08-13T07:12:14.694407576Z" level=error msg="StopPodSandbox for \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\" failed" error="failed to destroy network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.695421 kubelet[2615]: E0813 07:12:14.695034 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:14.695421 kubelet[2615]: E0813 07:12:14.695133 2615 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1"} Aug 13 07:12:14.695421 kubelet[2615]: E0813 07:12:14.695239 2615 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"119aced9-7cb0-4d80-8364-7168248c339c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:14.695421 kubelet[2615]: E0813 07:12:14.695277 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"119aced9-7cb0-4d80-8364-7168248c339c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7549bfdd56-dfv7p" podUID="119aced9-7cb0-4d80-8364-7168248c339c" Aug 13 07:12:14.724492 containerd[1466]: time="2025-08-13T07:12:14.723649180Z" level=error msg="StopPodSandbox for \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\" failed" error="failed to destroy network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.724492 containerd[1466]: time="2025-08-13T07:12:14.723776588Z" level=error msg="StopPodSandbox for \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\" failed" error="failed to destroy network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.724743 kubelet[2615]: E0813 07:12:14.724077 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:14.724743 kubelet[2615]: E0813 07:12:14.724147 2615 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8"} Aug 13 07:12:14.724743 kubelet[2615]: E0813 07:12:14.724223 2615 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89495aa1-ee61-41c8-8f49-33992f3f9e26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:14.724743 kubelet[2615]: E0813 07:12:14.724261 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89495aa1-ee61-41c8-8f49-33992f3f9e26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-82cfm" podUID="89495aa1-ee61-41c8-8f49-33992f3f9e26" Aug 13 07:12:14.726481 kubelet[2615]: E0813 07:12:14.724318 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:14.726481 kubelet[2615]: E0813 07:12:14.724373 2615 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254"} Aug 13 07:12:14.726481 kubelet[2615]: E0813 07:12:14.724411 2615 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"824f485a-87b0-420d-9975-44d490a376b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:14.726481 kubelet[2615]: E0813 07:12:14.724447 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"824f485a-87b0-420d-9975-44d490a376b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-cmvkg" podUID="824f485a-87b0-420d-9975-44d490a376b1" Aug 13 07:12:14.734290 containerd[1466]: time="2025-08-13T07:12:14.734233333Z" level=error msg="StopPodSandbox for \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\" failed" error="failed to destroy network for sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.734803 kubelet[2615]: E0813 07:12:14.734754 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:14.735045 kubelet[2615]: E0813 07:12:14.735016 2615 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726"} Aug 13 07:12:14.735336 kubelet[2615]: E0813 07:12:14.735278 2615 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68f698fa-855d-4911-b071-8b5d910e948d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:14.735636 kubelet[2615]: E0813 07:12:14.735560 2615 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"68f698fa-855d-4911-b071-8b5d910e948d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-phmj8" podUID="68f698fa-855d-4911-b071-8b5d910e948d" Aug 13 07:12:14.738389 containerd[1466]: time="2025-08-13T07:12:14.737948426Z" level=error msg="StopPodSandbox for \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\" failed" error="failed to destroy network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:14.738533 kubelet[2615]: E0813 07:12:14.738202 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:14.738533 kubelet[2615]: E0813 07:12:14.738254 2615 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9"} Aug 13 07:12:14.738533 kubelet[2615]: E0813 07:12:14.738303 2615 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9dd82a74-cb79-405c-9c40-0ccdbc701a0f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:14.738533 kubelet[2615]: E0813 07:12:14.738343 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9dd82a74-cb79-405c-9c40-0ccdbc701a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d489544c6-rtpdd" podUID="9dd82a74-cb79-405c-9c40-0ccdbc701a0f" Aug 13 07:12:15.133592 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3-shm.mount: Deactivated successfully. Aug 13 07:12:15.133730 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1-shm.mount: Deactivated successfully. Aug 13 07:12:15.133841 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614-shm.mount: Deactivated successfully. Aug 13 07:12:15.133955 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8-shm.mount: Deactivated successfully. 
Aug 13 07:12:15.595547 kubelet[2615]: I0813 07:12:15.595430 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:15.597813 containerd[1466]: time="2025-08-13T07:12:15.597087720Z" level=info msg="StopPodSandbox for \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\"" Aug 13 07:12:15.597813 containerd[1466]: time="2025-08-13T07:12:15.597364063Z" level=info msg="Ensure that sandbox 8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084 in task-service has been cleanup successfully" Aug 13 07:12:15.599326 kubelet[2615]: I0813 07:12:15.599294 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:15.603538 kubelet[2615]: I0813 07:12:15.602982 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:15.603995 containerd[1466]: time="2025-08-13T07:12:15.603740667Z" level=info msg="StopPodSandbox for \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\"" Aug 13 07:12:15.603995 containerd[1466]: time="2025-08-13T07:12:15.604023117Z" level=info msg="Ensure that sandbox 3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3 in task-service has been cleanup successfully" Aug 13 07:12:15.605604 containerd[1466]: time="2025-08-13T07:12:15.605276930Z" level=info msg="StopPodSandbox for \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\"" Aug 13 07:12:15.605604 containerd[1466]: time="2025-08-13T07:12:15.605493460Z" level=info msg="Ensure that sandbox dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614 in task-service has been cleanup successfully" Aug 13 07:12:15.665205 containerd[1466]: time="2025-08-13T07:12:15.665077367Z" level=error msg="StopPodSandbox for 
\"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\" failed" error="failed to destroy network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:15.667252 kubelet[2615]: E0813 07:12:15.666265 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:15.667252 kubelet[2615]: E0813 07:12:15.666332 2615 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3"} Aug 13 07:12:15.667252 kubelet[2615]: E0813 07:12:15.666384 2615 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:15.667252 kubelet[2615]: E0813 07:12:15.666422 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-5mqk4" podUID="1cb2b53d-86e2-4e78-ad45-c6b1da7fe653" Aug 13 07:12:15.676127 containerd[1466]: time="2025-08-13T07:12:15.676046177Z" level=error msg="StopPodSandbox for \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\" failed" error="failed to destroy network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:15.676521 kubelet[2615]: E0813 07:12:15.676446 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:15.676649 kubelet[2615]: E0813 07:12:15.676542 2615 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084"} Aug 13 07:12:15.676649 kubelet[2615]: E0813 07:12:15.676614 2615 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:15.676841 kubelet[2615]: E0813 07:12:15.676671 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79f55c7ffd-cs8ql" podUID="b439bd5c-19e5-49c8-9c88-5a4a70429a74" Aug 13 07:12:15.687466 containerd[1466]: time="2025-08-13T07:12:15.686753053Z" level=error msg="StopPodSandbox for \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\" failed" error="failed to destroy network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:15.687643 kubelet[2615]: E0813 07:12:15.687123 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:15.687643 kubelet[2615]: E0813 07:12:15.687203 2615 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614"} Aug 13 07:12:15.687643 kubelet[2615]: E0813 07:12:15.687262 2615 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"714ca8a9-a538-482c-a68a-c0ed3f848627\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:15.687643 kubelet[2615]: E0813 07:12:15.687296 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"714ca8a9-a538-482c-a68a-c0ed3f848627\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7549bfdd56-x7dn2" podUID="714ca8a9-a538-482c-a68a-c0ed3f848627" Aug 13 07:12:21.793840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611121743.mount: Deactivated successfully. 
Aug 13 07:12:21.831810 containerd[1466]: time="2025-08-13T07:12:21.831714249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:21.833324 containerd[1466]: time="2025-08-13T07:12:21.833229870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:12:21.834954 containerd[1466]: time="2025-08-13T07:12:21.834874482Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:21.840497 containerd[1466]: time="2025-08-13T07:12:21.839405474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:21.840497 containerd[1466]: time="2025-08-13T07:12:21.840289747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.276881352s" Aug 13 07:12:21.840497 containerd[1466]: time="2025-08-13T07:12:21.840333985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:12:21.865334 containerd[1466]: time="2025-08-13T07:12:21.865263519Z" level=info msg="CreateContainer within sandbox \"85a0279e53b88db00dbdddf49285656c6aaca69c695ad7c64cda4678d1ebc446\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:12:21.889074 containerd[1466]: time="2025-08-13T07:12:21.889022933Z" level=info 
msg="CreateContainer within sandbox \"85a0279e53b88db00dbdddf49285656c6aaca69c695ad7c64cda4678d1ebc446\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dc94d374990ff3163120ab09cca4a26823b9b1bc4c0567d206b4bf1fa3c4219b\"" Aug 13 07:12:21.890290 containerd[1466]: time="2025-08-13T07:12:21.890256206Z" level=info msg="StartContainer for \"dc94d374990ff3163120ab09cca4a26823b9b1bc4c0567d206b4bf1fa3c4219b\"" Aug 13 07:12:21.933495 systemd[1]: Started cri-containerd-dc94d374990ff3163120ab09cca4a26823b9b1bc4c0567d206b4bf1fa3c4219b.scope - libcontainer container dc94d374990ff3163120ab09cca4a26823b9b1bc4c0567d206b4bf1fa3c4219b. Aug 13 07:12:21.978859 containerd[1466]: time="2025-08-13T07:12:21.978807217Z" level=info msg="StartContainer for \"dc94d374990ff3163120ab09cca4a26823b9b1bc4c0567d206b4bf1fa3c4219b\" returns successfully" Aug 13 07:12:22.109384 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:12:22.109576 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 07:12:22.263771 containerd[1466]: time="2025-08-13T07:12:22.263695679Z" level=info msg="StopPodSandbox for \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\"" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.393 [INFO][3812] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.393 [INFO][3812] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" iface="eth0" netns="/var/run/netns/cni-88abff72-b179-1a6e-483a-6a09de88332f" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.395 [INFO][3812] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" iface="eth0" netns="/var/run/netns/cni-88abff72-b179-1a6e-483a-6a09de88332f" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.395 [INFO][3812] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" iface="eth0" netns="/var/run/netns/cni-88abff72-b179-1a6e-483a-6a09de88332f" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.395 [INFO][3812] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.395 [INFO][3812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.452 [INFO][3821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" HandleID="k8s-pod-network.8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.453 [INFO][3821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.453 [INFO][3821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.468 [WARNING][3821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" HandleID="k8s-pod-network.8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.468 [INFO][3821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" HandleID="k8s-pod-network.8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.473 [INFO][3821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:22.483058 containerd[1466]: 2025-08-13 07:12:22.478 [INFO][3812] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:22.484188 containerd[1466]: time="2025-08-13T07:12:22.483328561Z" level=info msg="TearDown network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\" successfully" Aug 13 07:12:22.484188 containerd[1466]: time="2025-08-13T07:12:22.483371535Z" level=info msg="StopPodSandbox for \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\" returns successfully" Aug 13 07:12:22.582025 kubelet[2615]: I0813 07:12:22.581883 2615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b439bd5c-19e5-49c8-9c88-5a4a70429a74-whisker-ca-bundle\") pod \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\" (UID: \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\") " Aug 13 07:12:22.584317 kubelet[2615]: I0813 07:12:22.583012 2615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnq2d\" 
(UniqueName: \"kubernetes.io/projected/b439bd5c-19e5-49c8-9c88-5a4a70429a74-kube-api-access-cnq2d\") pod \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\" (UID: \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\") " Aug 13 07:12:22.584317 kubelet[2615]: I0813 07:12:22.583070 2615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b439bd5c-19e5-49c8-9c88-5a4a70429a74-whisker-backend-key-pair\") pod \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\" (UID: \"b439bd5c-19e5-49c8-9c88-5a4a70429a74\") " Aug 13 07:12:22.584317 kubelet[2615]: I0813 07:12:22.582515 2615 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b439bd5c-19e5-49c8-9c88-5a4a70429a74-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b439bd5c-19e5-49c8-9c88-5a4a70429a74" (UID: "b439bd5c-19e5-49c8-9c88-5a4a70429a74"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:12:22.590447 kubelet[2615]: I0813 07:12:22.590361 2615 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b439bd5c-19e5-49c8-9c88-5a4a70429a74-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b439bd5c-19e5-49c8-9c88-5a4a70429a74" (UID: "b439bd5c-19e5-49c8-9c88-5a4a70429a74"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:12:22.593216 kubelet[2615]: I0813 07:12:22.592955 2615 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b439bd5c-19e5-49c8-9c88-5a4a70429a74-kube-api-access-cnq2d" (OuterVolumeSpecName: "kube-api-access-cnq2d") pod "b439bd5c-19e5-49c8-9c88-5a4a70429a74" (UID: "b439bd5c-19e5-49c8-9c88-5a4a70429a74"). InnerVolumeSpecName "kube-api-access-cnq2d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:12:22.645865 systemd[1]: Removed slice kubepods-besteffort-podb439bd5c_19e5_49c8_9c88_5a4a70429a74.slice - libcontainer container kubepods-besteffort-podb439bd5c_19e5_49c8_9c88_5a4a70429a74.slice. Aug 13 07:12:22.674208 kubelet[2615]: I0813 07:12:22.673687 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t4wpz" podStartSLOduration=1.773698336 podStartE2EDuration="19.673664744s" podCreationTimestamp="2025-08-13 07:12:03 +0000 UTC" firstStartedPulling="2025-08-13 07:12:03.941565007 +0000 UTC m=+21.932468604" lastFinishedPulling="2025-08-13 07:12:21.841531404 +0000 UTC m=+39.832435012" observedRunningTime="2025-08-13 07:12:22.672944412 +0000 UTC m=+40.663848043" watchObservedRunningTime="2025-08-13 07:12:22.673664744 +0000 UTC m=+40.664568359" Aug 13 07:12:22.687032 kubelet[2615]: I0813 07:12:22.684055 2615 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b439bd5c-19e5-49c8-9c88-5a4a70429a74-whisker-backend-key-pair\") on node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 07:12:22.687032 kubelet[2615]: I0813 07:12:22.684097 2615 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b439bd5c-19e5-49c8-9c88-5a4a70429a74-whisker-ca-bundle\") on node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 07:12:22.687032 kubelet[2615]: I0813 07:12:22.684118 2615 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cnq2d\" (UniqueName: \"kubernetes.io/projected/b439bd5c-19e5-49c8-9c88-5a4a70429a74-kube-api-access-cnq2d\") on node \"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 07:12:22.789243 systemd[1]: Created slice kubepods-besteffort-pode096c8cf_bdff_44ac_827d_3d20f359032e.slice - libcontainer 
container kubepods-besteffort-pode096c8cf_bdff_44ac_827d_3d20f359032e.slice. Aug 13 07:12:22.798863 systemd[1]: run-netns-cni\x2d88abff72\x2db179\x2d1a6e\x2d483a\x2d6a09de88332f.mount: Deactivated successfully. Aug 13 07:12:22.799009 systemd[1]: var-lib-kubelet-pods-b439bd5c\x2d19e5\x2d49c8\x2d9c88\x2d5a4a70429a74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcnq2d.mount: Deactivated successfully. Aug 13 07:12:22.799118 systemd[1]: var-lib-kubelet-pods-b439bd5c\x2d19e5\x2d49c8\x2d9c88\x2d5a4a70429a74-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:12:22.885651 kubelet[2615]: I0813 07:12:22.885454 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e096c8cf-bdff-44ac-827d-3d20f359032e-whisker-backend-key-pair\") pod \"whisker-5bcf7c68b-lp8td\" (UID: \"e096c8cf-bdff-44ac-827d-3d20f359032e\") " pod="calico-system/whisker-5bcf7c68b-lp8td" Aug 13 07:12:22.885651 kubelet[2615]: I0813 07:12:22.885527 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchn6\" (UniqueName: \"kubernetes.io/projected/e096c8cf-bdff-44ac-827d-3d20f359032e-kube-api-access-mchn6\") pod \"whisker-5bcf7c68b-lp8td\" (UID: \"e096c8cf-bdff-44ac-827d-3d20f359032e\") " pod="calico-system/whisker-5bcf7c68b-lp8td" Aug 13 07:12:22.885651 kubelet[2615]: I0813 07:12:22.885581 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e096c8cf-bdff-44ac-827d-3d20f359032e-whisker-ca-bundle\") pod \"whisker-5bcf7c68b-lp8td\" (UID: \"e096c8cf-bdff-44ac-827d-3d20f359032e\") " pod="calico-system/whisker-5bcf7c68b-lp8td" Aug 13 07:12:23.109112 containerd[1466]: time="2025-08-13T07:12:23.109049160Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5bcf7c68b-lp8td,Uid:e096c8cf-bdff-44ac-827d-3d20f359032e,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:23.260218 systemd-networkd[1375]: calibf8c2108990: Link UP Aug 13 07:12:23.262582 systemd-networkd[1375]: calibf8c2108990: Gained carrier Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.158 [INFO][3842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.170 [INFO][3842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0 whisker-5bcf7c68b- calico-system e096c8cf-bdff-44ac-827d-3d20f359032e 900 0 2025-08-13 07:12:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5bcf7c68b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal whisker-5bcf7c68b-lp8td eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibf8c2108990 [] [] }} ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Namespace="calico-system" Pod="whisker-5bcf7c68b-lp8td" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.170 [INFO][3842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Namespace="calico-system" Pod="whisker-5bcf7c68b-lp8td" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.204 [INFO][3853] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" HandleID="k8s-pod-network.d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.204 [INFO][3853] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" HandleID="k8s-pod-network.d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", "pod":"whisker-5bcf7c68b-lp8td", "timestamp":"2025-08-13 07:12:23.204423358 +0000 UTC"}, Hostname:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.204 [INFO][3853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.204 [INFO][3853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.204 [INFO][3853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal' Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.214 [INFO][3853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.220 [INFO][3853] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.227 [INFO][3853] ipam/ipam.go 511: Trying affinity for 192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.229 [INFO][3853] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.232 [INFO][3853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.232 [INFO][3853] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.234 [INFO][3853] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0 Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.239 [INFO][3853] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.79.0/26 handle="k8s-pod-network.d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.247 [INFO][3853] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.1/26] block=192.168.79.0/26 handle="k8s-pod-network.d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.247 [INFO][3853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.1/26] handle="k8s-pod-network.d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.247 [INFO][3853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:23.285475 containerd[1466]: 2025-08-13 07:12:23.247 [INFO][3853] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.1/26] IPv6=[] ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" HandleID="k8s-pod-network.d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" Aug 13 07:12:23.287106 containerd[1466]: 2025-08-13 07:12:23.250 [INFO][3842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Namespace="calico-system" Pod="whisker-5bcf7c68b-lp8td" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0", 
GenerateName:"whisker-5bcf7c68b-", Namespace:"calico-system", SelfLink:"", UID:"e096c8cf-bdff-44ac-827d-3d20f359032e", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bcf7c68b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-5bcf7c68b-lp8td", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf8c2108990", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:23.287106 containerd[1466]: 2025-08-13 07:12:23.250 [INFO][3842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.1/32] ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Namespace="calico-system" Pod="whisker-5bcf7c68b-lp8td" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" Aug 13 07:12:23.287106 containerd[1466]: 2025-08-13 07:12:23.250 [INFO][3842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf8c2108990 ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Namespace="calico-system" Pod="whisker-5bcf7c68b-lp8td" 
WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" Aug 13 07:12:23.287106 containerd[1466]: 2025-08-13 07:12:23.262 [INFO][3842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Namespace="calico-system" Pod="whisker-5bcf7c68b-lp8td" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" Aug 13 07:12:23.287106 containerd[1466]: 2025-08-13 07:12:23.262 [INFO][3842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Namespace="calico-system" Pod="whisker-5bcf7c68b-lp8td" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0", GenerateName:"whisker-5bcf7c68b-", Namespace:"calico-system", SelfLink:"", UID:"e096c8cf-bdff-44ac-827d-3d20f359032e", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bcf7c68b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", 
ContainerID:"d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0", Pod:"whisker-5bcf7c68b-lp8td", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf8c2108990", MAC:"aa:c9:90:9a:91:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:23.287106 containerd[1466]: 2025-08-13 07:12:23.280 [INFO][3842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0" Namespace="calico-system" Pod="whisker-5bcf7c68b-lp8td" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--5bcf7c68b--lp8td-eth0" Aug 13 07:12:23.313116 containerd[1466]: time="2025-08-13T07:12:23.312922704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:23.313116 containerd[1466]: time="2025-08-13T07:12:23.313032410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:23.313116 containerd[1466]: time="2025-08-13T07:12:23.313060406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:23.315307 containerd[1466]: time="2025-08-13T07:12:23.314400249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:23.345432 systemd[1]: Started cri-containerd-d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0.scope - libcontainer container d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0. 
Aug 13 07:12:23.404115 containerd[1466]: time="2025-08-13T07:12:23.404039077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bcf7c68b-lp8td,Uid:e096c8cf-bdff-44ac-827d-3d20f359032e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0\"" Aug 13 07:12:23.407827 containerd[1466]: time="2025-08-13T07:12:23.407791566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:12:24.288249 kernel: bpftool[4010]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:12:24.327119 kubelet[2615]: I0813 07:12:24.327066 2615 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b439bd5c-19e5-49c8-9c88-5a4a70429a74" path="/var/lib/kubelet/pods/b439bd5c-19e5-49c8-9c88-5a4a70429a74/volumes" Aug 13 07:12:24.607414 systemd-networkd[1375]: vxlan.calico: Link UP Aug 13 07:12:24.607427 systemd-networkd[1375]: vxlan.calico: Gained carrier Aug 13 07:12:24.745316 systemd-networkd[1375]: calibf8c2108990: Gained IPv6LL Aug 13 07:12:25.326089 containerd[1466]: time="2025-08-13T07:12:25.325218971Z" level=info msg="StopPodSandbox for \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\"" Aug 13 07:12:25.326692 containerd[1466]: time="2025-08-13T07:12:25.326577211Z" level=info msg="StopPodSandbox for \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\"" Aug 13 07:12:25.521152 containerd[1466]: time="2025-08-13T07:12:25.520284755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:25.523962 containerd[1466]: time="2025-08-13T07:12:25.523894504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:12:25.526207 containerd[1466]: time="2025-08-13T07:12:25.525263622Z" level=info msg="ImageCreate event 
name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:25.534559 containerd[1466]: time="2025-08-13T07:12:25.534501168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:25.539565 containerd[1466]: time="2025-08-13T07:12:25.539498988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.131571355s" Aug 13 07:12:25.539565 containerd[1466]: time="2025-08-13T07:12:25.539570823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:12:25.549405 containerd[1466]: time="2025-08-13T07:12:25.549328188Z" level=info msg="CreateContainer within sandbox \"d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.452 [INFO][4122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.453 [INFO][4122] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" iface="eth0" netns="/var/run/netns/cni-5f03ab4b-dd73-1c7a-fb9e-baf2194de481" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.455 [INFO][4122] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" iface="eth0" netns="/var/run/netns/cni-5f03ab4b-dd73-1c7a-fb9e-baf2194de481" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.456 [INFO][4122] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" iface="eth0" netns="/var/run/netns/cni-5f03ab4b-dd73-1c7a-fb9e-baf2194de481" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.456 [INFO][4122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.456 [INFO][4122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.551 [INFO][4134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" HandleID="k8s-pod-network.3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.551 [INFO][4134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.551 [INFO][4134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.565 [WARNING][4134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" HandleID="k8s-pod-network.3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.565 [INFO][4134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" HandleID="k8s-pod-network.3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.568 [INFO][4134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:25.593714 containerd[1466]: 2025-08-13 07:12:25.574 [INFO][4122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:25.595208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466701597.mount: Deactivated successfully. 
Aug 13 07:12:25.600327 containerd[1466]: time="2025-08-13T07:12:25.596570565Z" level=info msg="TearDown network for sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\" successfully" Aug 13 07:12:25.600327 containerd[1466]: time="2025-08-13T07:12:25.596923239Z" level=info msg="StopPodSandbox for \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\" returns successfully" Aug 13 07:12:25.604840 containerd[1466]: time="2025-08-13T07:12:25.602474082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phmj8,Uid:68f698fa-855d-4911-b071-8b5d910e948d,Namespace:calico-system,Attempt:1,}" Aug 13 07:12:25.607982 systemd[1]: run-netns-cni\x2d5f03ab4b\x2ddd73\x2d1c7a\x2dfb9e\x2dbaf2194de481.mount: Deactivated successfully. Aug 13 07:12:25.609987 containerd[1466]: time="2025-08-13T07:12:25.609889518Z" level=info msg="CreateContainer within sandbox \"d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"aac4bcd952e09ad24380bb7b0873ff1d742c5d05e5c16243fff3f3e8c8e53240\"" Aug 13 07:12:25.612815 containerd[1466]: time="2025-08-13T07:12:25.610933846Z" level=info msg="StartContainer for \"aac4bcd952e09ad24380bb7b0873ff1d742c5d05e5c16243fff3f3e8c8e53240\"" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.492 [INFO][4121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.494 [INFO][4121] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" iface="eth0" netns="/var/run/netns/cni-057bee6c-2114-03b6-53a8-f17a5337b7e1" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.495 [INFO][4121] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" iface="eth0" netns="/var/run/netns/cni-057bee6c-2114-03b6-53a8-f17a5337b7e1" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.495 [INFO][4121] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" iface="eth0" netns="/var/run/netns/cni-057bee6c-2114-03b6-53a8-f17a5337b7e1" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.495 [INFO][4121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.495 [INFO][4121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.561 [INFO][4140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" HandleID="k8s-pod-network.23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.561 [INFO][4140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.569 [INFO][4140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.591 [WARNING][4140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" HandleID="k8s-pod-network.23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.591 [INFO][4140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" HandleID="k8s-pod-network.23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.606 [INFO][4140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:25.619912 containerd[1466]: 2025-08-13 07:12:25.613 [INFO][4121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:25.623161 containerd[1466]: time="2025-08-13T07:12:25.620782773Z" level=info msg="TearDown network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\" successfully" Aug 13 07:12:25.623161 containerd[1466]: time="2025-08-13T07:12:25.620828613Z" level=info msg="StopPodSandbox for \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\" returns successfully" Aug 13 07:12:25.625122 containerd[1466]: time="2025-08-13T07:12:25.624662548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cmvkg,Uid:824f485a-87b0-420d-9975-44d490a376b1,Namespace:kube-system,Attempt:1,}" Aug 13 07:12:25.627149 systemd[1]: run-netns-cni\x2d057bee6c\x2d2114\x2d03b6\x2d53a8\x2df17a5337b7e1.mount: Deactivated successfully. 
Aug 13 07:12:25.718791 systemd[1]: Started cri-containerd-aac4bcd952e09ad24380bb7b0873ff1d742c5d05e5c16243fff3f3e8c8e53240.scope - libcontainer container aac4bcd952e09ad24380bb7b0873ff1d742c5d05e5c16243fff3f3e8c8e53240. Aug 13 07:12:25.840651 containerd[1466]: time="2025-08-13T07:12:25.840582748Z" level=info msg="StartContainer for \"aac4bcd952e09ad24380bb7b0873ff1d742c5d05e5c16243fff3f3e8c8e53240\" returns successfully" Aug 13 07:12:25.845903 containerd[1466]: time="2025-08-13T07:12:25.845564179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:12:25.933469 systemd-networkd[1375]: cali35e20f4bfcc: Link UP Aug 13 07:12:25.936017 systemd-networkd[1375]: cali35e20f4bfcc: Gained carrier Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.751 [INFO][4153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0 csi-node-driver- calico-system 68f698fa-855d-4911-b071-8b5d910e948d 911 0 2025-08-13 07:12:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal csi-node-driver-phmj8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali35e20f4bfcc [] [] }} ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Namespace="calico-system" Pod="csi-node-driver-phmj8" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.753 [INFO][4153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Namespace="calico-system" Pod="csi-node-driver-phmj8" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.851 [INFO][4198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" HandleID="k8s-pod-network.7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.852 [INFO][4198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" HandleID="k8s-pod-network.7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", "pod":"csi-node-driver-phmj8", "timestamp":"2025-08-13 07:12:25.851649646 +0000 UTC"}, Hostname:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.852 [INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.852 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.853 [INFO][4198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal' Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.868 [INFO][4198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.878 [INFO][4198] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.889 [INFO][4198] ipam/ipam.go 511: Trying affinity for 192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.894 [INFO][4198] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.899 [INFO][4198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.899 [INFO][4198] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.902 [INFO][4198] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.909 [INFO][4198] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.79.0/26 handle="k8s-pod-network.7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.919 [INFO][4198] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.2/26] block=192.168.79.0/26 handle="k8s-pod-network.7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.919 [INFO][4198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.2/26] handle="k8s-pod-network.7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.919 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:25.975722 containerd[1466]: 2025-08-13 07:12:25.920 [INFO][4198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.2/26] IPv6=[] ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" HandleID="k8s-pod-network.7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:25.976965 containerd[1466]: 2025-08-13 07:12:25.924 [INFO][4153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Namespace="calico-system" Pod="csi-node-driver-phmj8" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68f698fa-855d-4911-b071-8b5d910e948d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-phmj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali35e20f4bfcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:25.976965 containerd[1466]: 2025-08-13 07:12:25.927 [INFO][4153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.2/32] ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Namespace="calico-system" Pod="csi-node-driver-phmj8" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:25.976965 containerd[1466]: 2025-08-13 07:12:25.927 [INFO][4153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35e20f4bfcc ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Namespace="calico-system" 
Pod="csi-node-driver-phmj8" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:25.976965 containerd[1466]: 2025-08-13 07:12:25.935 [INFO][4153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Namespace="calico-system" Pod="csi-node-driver-phmj8" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:25.976965 containerd[1466]: 2025-08-13 07:12:25.937 [INFO][4153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Namespace="calico-system" Pod="csi-node-driver-phmj8" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68f698fa-855d-4911-b071-8b5d910e948d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf", Pod:"csi-node-driver-phmj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali35e20f4bfcc", MAC:"ca:c1:6e:8c:48:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:25.976965 containerd[1466]: 2025-08-13 07:12:25.960 [INFO][4153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf" Namespace="calico-system" Pod="csi-node-driver-phmj8" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:26.025615 containerd[1466]: time="2025-08-13T07:12:26.025036405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:26.025615 containerd[1466]: time="2025-08-13T07:12:26.025211905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:26.025615 containerd[1466]: time="2025-08-13T07:12:26.025283062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:26.027111 containerd[1466]: time="2025-08-13T07:12:26.025936910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:26.061786 systemd[1]: Started cri-containerd-7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf.scope - libcontainer container 7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf. Aug 13 07:12:26.068478 systemd-networkd[1375]: calie77b8d90e6e: Link UP Aug 13 07:12:26.073364 systemd-networkd[1375]: calie77b8d90e6e: Gained carrier Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:25.789 [INFO][4175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0 coredns-674b8bbfcf- kube-system 824f485a-87b0-420d-9975-44d490a376b1 912 0 2025-08-13 07:11:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal coredns-674b8bbfcf-cmvkg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie77b8d90e6e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmvkg" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:25.790 [INFO][4175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmvkg" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:25.895 [INFO][4204] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" HandleID="k8s-pod-network.cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:25.897 [INFO][4204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" HandleID="k8s-pod-network.cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000338e70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-cmvkg", "timestamp":"2025-08-13 07:12:25.895923252 +0000 UTC"}, Hostname:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:25.897 [INFO][4204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:25.921 [INFO][4204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:25.921 [INFO][4204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal' Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:25.973 [INFO][4204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:25.992 [INFO][4204] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.009 [INFO][4204] ipam/ipam.go 511: Trying affinity for 192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.015 [INFO][4204] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.021 [INFO][4204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.022 [INFO][4204] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.028 [INFO][4204] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00 Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.040 [INFO][4204] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.79.0/26 handle="k8s-pod-network.cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.054 [INFO][4204] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.3/26] block=192.168.79.0/26 handle="k8s-pod-network.cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.054 [INFO][4204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.3/26] handle="k8s-pod-network.cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.054 [INFO][4204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:26.107210 containerd[1466]: 2025-08-13 07:12:26.054 [INFO][4204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.3/26] IPv6=[] ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" HandleID="k8s-pod-network.cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:26.108477 containerd[1466]: 2025-08-13 07:12:26.060 [INFO][4175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmvkg" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"824f485a-87b0-420d-9975-44d490a376b1", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-cmvkg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie77b8d90e6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:26.108477 containerd[1466]: 2025-08-13 07:12:26.060 [INFO][4175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.3/32] ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-cmvkg" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:26.108477 containerd[1466]: 2025-08-13 07:12:26.060 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie77b8d90e6e ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmvkg" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:26.108477 containerd[1466]: 2025-08-13 07:12:26.075 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmvkg" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:26.108477 containerd[1466]: 2025-08-13 07:12:26.075 [INFO][4175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmvkg" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"824f485a-87b0-420d-9975-44d490a376b1", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00", Pod:"coredns-674b8bbfcf-cmvkg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie77b8d90e6e", MAC:"b2:f2:bb:da:88:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:26.108477 containerd[1466]: 2025-08-13 07:12:26.098 [INFO][4175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmvkg" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:26.157955 containerd[1466]: time="2025-08-13T07:12:26.157557712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:26.157955 containerd[1466]: time="2025-08-13T07:12:26.157649741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:26.157955 containerd[1466]: time="2025-08-13T07:12:26.157678671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:26.157955 containerd[1466]: time="2025-08-13T07:12:26.157801662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:26.181027 containerd[1466]: time="2025-08-13T07:12:26.180139112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phmj8,Uid:68f698fa-855d-4911-b071-8b5d910e948d,Namespace:calico-system,Attempt:1,} returns sandbox id \"7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf\"" Aug 13 07:12:26.204441 systemd[1]: Started cri-containerd-cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00.scope - libcontainer container cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00. 
Aug 13 07:12:26.266823 containerd[1466]: time="2025-08-13T07:12:26.266773952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cmvkg,Uid:824f485a-87b0-420d-9975-44d490a376b1,Namespace:kube-system,Attempt:1,} returns sandbox id \"cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00\"" Aug 13 07:12:26.275510 containerd[1466]: time="2025-08-13T07:12:26.275453887Z" level=info msg="CreateContainer within sandbox \"cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:12:26.280516 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Aug 13 07:12:26.291727 containerd[1466]: time="2025-08-13T07:12:26.291591948Z" level=info msg="CreateContainer within sandbox \"cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22555897b044f5cf3e97efd9ebdd4c7e8b1a31f78c5e13c823645d5f927ade00\"" Aug 13 07:12:26.292664 containerd[1466]: time="2025-08-13T07:12:26.292455743Z" level=info msg="StartContainer for \"22555897b044f5cf3e97efd9ebdd4c7e8b1a31f78c5e13c823645d5f927ade00\"" Aug 13 07:12:26.328622 containerd[1466]: time="2025-08-13T07:12:26.328420140Z" level=info msg="StopPodSandbox for \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\"" Aug 13 07:12:26.335520 systemd[1]: Started cri-containerd-22555897b044f5cf3e97efd9ebdd4c7e8b1a31f78c5e13c823645d5f927ade00.scope - libcontainer container 22555897b044f5cf3e97efd9ebdd4c7e8b1a31f78c5e13c823645d5f927ade00. 
Aug 13 07:12:26.395418 containerd[1466]: time="2025-08-13T07:12:26.395227385Z" level=info msg="StartContainer for \"22555897b044f5cf3e97efd9ebdd4c7e8b1a31f78c5e13c823645d5f927ade00\" returns successfully" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.445 [INFO][4359] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.445 [INFO][4359] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" iface="eth0" netns="/var/run/netns/cni-31da00cd-c970-ba00-74b9-5f5ab39c6768" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.451 [INFO][4359] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" iface="eth0" netns="/var/run/netns/cni-31da00cd-c970-ba00-74b9-5f5ab39c6768" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.452 [INFO][4359] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" iface="eth0" netns="/var/run/netns/cni-31da00cd-c970-ba00-74b9-5f5ab39c6768" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.453 [INFO][4359] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.453 [INFO][4359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.498 [INFO][4377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" HandleID="k8s-pod-network.598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.499 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.499 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.509 [WARNING][4377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" HandleID="k8s-pod-network.598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.510 [INFO][4377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" HandleID="k8s-pod-network.598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.511 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:26.515000 containerd[1466]: 2025-08-13 07:12:26.513 [INFO][4359] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:26.516011 containerd[1466]: time="2025-08-13T07:12:26.515203912Z" level=info msg="TearDown network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\" successfully" Aug 13 07:12:26.516011 containerd[1466]: time="2025-08-13T07:12:26.515243954Z" level=info msg="StopPodSandbox for \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\" returns successfully" Aug 13 07:12:26.516478 containerd[1466]: time="2025-08-13T07:12:26.516437115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d489544c6-rtpdd,Uid:9dd82a74-cb79-405c-9c40-0ccdbc701a0f,Namespace:calico-system,Attempt:1,}" Aug 13 07:12:26.598657 systemd[1]: run-netns-cni\x2d31da00cd\x2dc970\x2dba00\x2d74b9\x2d5f5ab39c6768.mount: Deactivated successfully. 
Aug 13 07:12:26.742609 systemd-networkd[1375]: cali99ae51b5a8f: Link UP Aug 13 07:12:26.746407 systemd-networkd[1375]: cali99ae51b5a8f: Gained carrier Aug 13 07:12:26.763199 kubelet[2615]: I0813 07:12:26.762214 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cmvkg" podStartSLOduration=39.762147978 podStartE2EDuration="39.762147978s" podCreationTimestamp="2025-08-13 07:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:26.718090718 +0000 UTC m=+44.708994334" watchObservedRunningTime="2025-08-13 07:12:26.762147978 +0000 UTC m=+44.753051595" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.589 [INFO][4385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0 calico-kube-controllers-d489544c6- calico-system 9dd82a74-cb79-405c-9c40-0ccdbc701a0f 930 0 2025-08-13 07:12:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d489544c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal calico-kube-controllers-d489544c6-rtpdd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali99ae51b5a8f [] [] }} ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Namespace="calico-system" Pod="calico-kube-controllers-d489544c6-rtpdd" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.589 [INFO][4385] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Namespace="calico-system" Pod="calico-kube-controllers-d489544c6-rtpdd" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.646 [INFO][4398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" HandleID="k8s-pod-network.02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.647 [INFO][4398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" HandleID="k8s-pod-network.02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", "pod":"calico-kube-controllers-d489544c6-rtpdd", "timestamp":"2025-08-13 07:12:26.646719716 +0000 UTC"}, Hostname:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.647 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.647 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.647 [INFO][4398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal' Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.661 [INFO][4398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.671 [INFO][4398] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.682 [INFO][4398] ipam/ipam.go 511: Trying affinity for 192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.686 [INFO][4398] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.693 [INFO][4398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.693 [INFO][4398] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.699 [INFO][4398] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f Aug 13 
07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.706 [INFO][4398] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.723 [INFO][4398] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.4/26] block=192.168.79.0/26 handle="k8s-pod-network.02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.723 [INFO][4398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.4/26] handle="k8s-pod-network.02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.723 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:12:26.817206 containerd[1466]: 2025-08-13 07:12:26.723 [INFO][4398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.4/26] IPv6=[] ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" HandleID="k8s-pod-network.02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.820366 containerd[1466]: 2025-08-13 07:12:26.729 [INFO][4385] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Namespace="calico-system" Pod="calico-kube-controllers-d489544c6-rtpdd" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0", GenerateName:"calico-kube-controllers-d489544c6-", Namespace:"calico-system", SelfLink:"", UID:"9dd82a74-cb79-405c-9c40-0ccdbc701a0f", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d489544c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-d489544c6-rtpdd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99ae51b5a8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:26.820366 containerd[1466]: 2025-08-13 07:12:26.729 [INFO][4385] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.4/32] ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Namespace="calico-system" Pod="calico-kube-controllers-d489544c6-rtpdd" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.820366 containerd[1466]: 2025-08-13 07:12:26.729 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99ae51b5a8f ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Namespace="calico-system" Pod="calico-kube-controllers-d489544c6-rtpdd" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.820366 containerd[1466]: 2025-08-13 07:12:26.744 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Namespace="calico-system" Pod="calico-kube-controllers-d489544c6-rtpdd" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.820366 containerd[1466]: 2025-08-13 07:12:26.750 [INFO][4385] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Namespace="calico-system" Pod="calico-kube-controllers-d489544c6-rtpdd" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0", GenerateName:"calico-kube-controllers-d489544c6-", Namespace:"calico-system", SelfLink:"", UID:"9dd82a74-cb79-405c-9c40-0ccdbc701a0f", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d489544c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f", Pod:"calico-kube-controllers-d489544c6-rtpdd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99ae51b5a8f", MAC:"0a:00:37:d0:ca:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 
07:12:26.820366 containerd[1466]: 2025-08-13 07:12:26.808 [INFO][4385] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f" Namespace="calico-system" Pod="calico-kube-controllers-d489544c6-rtpdd" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:26.902832 containerd[1466]: time="2025-08-13T07:12:26.902653666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:26.902832 containerd[1466]: time="2025-08-13T07:12:26.902760034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:26.902832 containerd[1466]: time="2025-08-13T07:12:26.902786449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:26.903210 containerd[1466]: time="2025-08-13T07:12:26.902965059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:26.974051 systemd[1]: Started cri-containerd-02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f.scope - libcontainer container 02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f. 
Aug 13 07:12:27.111392 systemd-networkd[1375]: cali35e20f4bfcc: Gained IPv6LL Aug 13 07:12:27.176319 systemd-networkd[1375]: calie77b8d90e6e: Gained IPv6LL Aug 13 07:12:27.220935 containerd[1466]: time="2025-08-13T07:12:27.220830764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d489544c6-rtpdd,Uid:9dd82a74-cb79-405c-9c40-0ccdbc701a0f,Namespace:calico-system,Attempt:1,} returns sandbox id \"02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f\"" Aug 13 07:12:27.326733 containerd[1466]: time="2025-08-13T07:12:27.326258436Z" level=info msg="StopPodSandbox for \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\"" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.445 [INFO][4471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.446 [INFO][4471] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" iface="eth0" netns="/var/run/netns/cni-5d3270a3-3f4c-0e60-73a8-5aa922ed2c36" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.446 [INFO][4471] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" iface="eth0" netns="/var/run/netns/cni-5d3270a3-3f4c-0e60-73a8-5aa922ed2c36" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.447 [INFO][4471] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" iface="eth0" netns="/var/run/netns/cni-5d3270a3-3f4c-0e60-73a8-5aa922ed2c36" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.447 [INFO][4471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.447 [INFO][4471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.510 [INFO][4479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" HandleID="k8s-pod-network.5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.511 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.511 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.522 [WARNING][4479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" HandleID="k8s-pod-network.5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.522 [INFO][4479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" HandleID="k8s-pod-network.5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.526 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:27.536119 containerd[1466]: 2025-08-13 07:12:27.529 [INFO][4471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:27.542197 containerd[1466]: time="2025-08-13T07:12:27.540272141Z" level=info msg="TearDown network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\" successfully" Aug 13 07:12:27.542197 containerd[1466]: time="2025-08-13T07:12:27.540322455Z" level=info msg="StopPodSandbox for \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\" returns successfully" Aug 13 07:12:27.546989 containerd[1466]: time="2025-08-13T07:12:27.546872836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7549bfdd56-dfv7p,Uid:119aced9-7cb0-4d80-8364-7168248c339c,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:12:27.550635 systemd[1]: run-netns-cni\x2d5d3270a3\x2d3f4c\x2d0e60\x2d73a8\x2d5aa922ed2c36.mount: Deactivated successfully. 
Aug 13 07:12:27.836157 systemd-networkd[1375]: cali0d2403214e9: Link UP Aug 13 07:12:27.836561 systemd-networkd[1375]: cali0d2403214e9: Gained carrier Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.666 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0 calico-apiserver-7549bfdd56- calico-apiserver 119aced9-7cb0-4d80-8364-7168248c339c 945 0 2025-08-13 07:11:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7549bfdd56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal calico-apiserver-7549bfdd56-dfv7p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0d2403214e9 [] [] }} ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-dfv7p" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.666 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-dfv7p" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.745 [INFO][4500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" 
HandleID="k8s-pod-network.887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.747 [INFO][4500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" HandleID="k8s-pod-network.887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ba60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", "pod":"calico-apiserver-7549bfdd56-dfv7p", "timestamp":"2025-08-13 07:12:27.745304964 +0000 UTC"}, Hostname:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.747 [INFO][4500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.747 [INFO][4500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.747 [INFO][4500] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal' Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.763 [INFO][4500] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.772 [INFO][4500] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.781 [INFO][4500] ipam/ipam.go 511: Trying affinity for 192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.787 [INFO][4500] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.793 [INFO][4500] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.793 [INFO][4500] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.797 [INFO][4500] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048 Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.806 [INFO][4500] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.79.0/26 handle="k8s-pod-network.887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.823 [INFO][4500] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.5/26] block=192.168.79.0/26 handle="k8s-pod-network.887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.823 [INFO][4500] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.5/26] handle="k8s-pod-network.887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.823 [INFO][4500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:27.869413 containerd[1466]: 2025-08-13 07:12:27.823 [INFO][4500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.5/26] IPv6=[] ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" HandleID="k8s-pod-network.887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.872441 containerd[1466]: 2025-08-13 07:12:27.828 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-dfv7p" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0", GenerateName:"calico-apiserver-7549bfdd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"119aced9-7cb0-4d80-8364-7168248c339c", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7549bfdd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7549bfdd56-dfv7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d2403214e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:27.872441 containerd[1466]: 2025-08-13 07:12:27.829 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.5/32] ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-dfv7p" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.872441 containerd[1466]: 2025-08-13 07:12:27.829 [INFO][4485] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d2403214e9 ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-dfv7p" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.872441 containerd[1466]: 2025-08-13 07:12:27.836 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-dfv7p" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.872441 containerd[1466]: 2025-08-13 07:12:27.837 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-dfv7p" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0", GenerateName:"calico-apiserver-7549bfdd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"119aced9-7cb0-4d80-8364-7168248c339c", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7549bfdd56", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048", Pod:"calico-apiserver-7549bfdd56-dfv7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d2403214e9", MAC:"8a:54:37:e8:32:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:27.872441 containerd[1466]: 2025-08-13 07:12:27.861 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-dfv7p" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:27.938577 containerd[1466]: time="2025-08-13T07:12:27.938290406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:27.938577 containerd[1466]: time="2025-08-13T07:12:27.938358634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:27.938577 containerd[1466]: time="2025-08-13T07:12:27.938378297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:27.939665 containerd[1466]: time="2025-08-13T07:12:27.938491990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:28.020499 systemd[1]: Started cri-containerd-887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048.scope - libcontainer container 887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048. Aug 13 07:12:28.115310 containerd[1466]: time="2025-08-13T07:12:28.115131344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7549bfdd56-dfv7p,Uid:119aced9-7cb0-4d80-8364-7168248c339c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048\"" Aug 13 07:12:28.328455 systemd-networkd[1375]: cali99ae51b5a8f: Gained IPv6LL Aug 13 07:12:28.521952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158104466.mount: Deactivated successfully. 
Aug 13 07:12:28.539090 containerd[1466]: time="2025-08-13T07:12:28.539008851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:28.540533 containerd[1466]: time="2025-08-13T07:12:28.540462336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:12:28.541895 containerd[1466]: time="2025-08-13T07:12:28.541751474Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:28.545432 containerd[1466]: time="2025-08-13T07:12:28.545384609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:28.546845 containerd[1466]: time="2025-08-13T07:12:28.546605407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.700873024s" Aug 13 07:12:28.546845 containerd[1466]: time="2025-08-13T07:12:28.546655875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:12:28.548555 containerd[1466]: time="2025-08-13T07:12:28.548525619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:12:28.554762 containerd[1466]: time="2025-08-13T07:12:28.554702951Z" level=info msg="CreateContainer within sandbox 
\"d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:12:28.573922 containerd[1466]: time="2025-08-13T07:12:28.572947539Z" level=info msg="CreateContainer within sandbox \"d3e1cde4266170f0a5fad8f19e0055d7327204f74f2e06f1becbb4a95fb1f4f0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"67c4ddf35c73c97b7b3b452cae3a9be694f1c7ed6c44c4bf6d5e7604122210b1\"" Aug 13 07:12:28.576006 containerd[1466]: time="2025-08-13T07:12:28.574323309Z" level=info msg="StartContainer for \"67c4ddf35c73c97b7b3b452cae3a9be694f1c7ed6c44c4bf6d5e7604122210b1\"" Aug 13 07:12:28.635427 systemd[1]: Started cri-containerd-67c4ddf35c73c97b7b3b452cae3a9be694f1c7ed6c44c4bf6d5e7604122210b1.scope - libcontainer container 67c4ddf35c73c97b7b3b452cae3a9be694f1c7ed6c44c4bf6d5e7604122210b1. Aug 13 07:12:28.709213 containerd[1466]: time="2025-08-13T07:12:28.708104046Z" level=info msg="StartContainer for \"67c4ddf35c73c97b7b3b452cae3a9be694f1c7ed6c44c4bf6d5e7604122210b1\" returns successfully" Aug 13 07:12:29.095546 systemd-networkd[1375]: cali0d2403214e9: Gained IPv6LL Aug 13 07:12:30.204895 containerd[1466]: time="2025-08-13T07:12:30.204813718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:30.206139 containerd[1466]: time="2025-08-13T07:12:30.206064230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:12:30.207563 containerd[1466]: time="2025-08-13T07:12:30.207475710Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:30.210579 containerd[1466]: time="2025-08-13T07:12:30.210493710Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:30.211858 containerd[1466]: time="2025-08-13T07:12:30.211593609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.662873741s" Aug 13 07:12:30.211858 containerd[1466]: time="2025-08-13T07:12:30.211653488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:12:30.213851 containerd[1466]: time="2025-08-13T07:12:30.213388622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:12:30.218121 containerd[1466]: time="2025-08-13T07:12:30.218051270Z" level=info msg="CreateContainer within sandbox \"7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:12:30.246264 containerd[1466]: time="2025-08-13T07:12:30.245827104Z" level=info msg="CreateContainer within sandbox \"7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"11801389a59312e88202209666f8012cd460ea94b753b04ac525b91ccf426de1\"" Aug 13 07:12:30.248562 containerd[1466]: time="2025-08-13T07:12:30.247585578Z" level=info msg="StartContainer for \"11801389a59312e88202209666f8012cd460ea94b753b04ac525b91ccf426de1\"" Aug 13 07:12:30.306780 systemd[1]: run-containerd-runc-k8s.io-11801389a59312e88202209666f8012cd460ea94b753b04ac525b91ccf426de1-runc.TEN6r8.mount: Deactivated successfully. 
Aug 13 07:12:30.319484 systemd[1]: Started cri-containerd-11801389a59312e88202209666f8012cd460ea94b753b04ac525b91ccf426de1.scope - libcontainer container 11801389a59312e88202209666f8012cd460ea94b753b04ac525b91ccf426de1. Aug 13 07:12:30.328281 containerd[1466]: time="2025-08-13T07:12:30.327745555Z" level=info msg="StopPodSandbox for \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\"" Aug 13 07:12:30.330506 containerd[1466]: time="2025-08-13T07:12:30.330059939Z" level=info msg="StopPodSandbox for \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\"" Aug 13 07:12:30.450775 kubelet[2615]: I0813 07:12:30.450685 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5bcf7c68b-lp8td" podStartSLOduration=3.308648717 podStartE2EDuration="8.45064634s" podCreationTimestamp="2025-08-13 07:12:22 +0000 UTC" firstStartedPulling="2025-08-13 07:12:23.406363431 +0000 UTC m=+41.397267023" lastFinishedPulling="2025-08-13 07:12:28.548361051 +0000 UTC m=+46.539264646" observedRunningTime="2025-08-13 07:12:28.753913889 +0000 UTC m=+46.744817507" watchObservedRunningTime="2025-08-13 07:12:30.45064634 +0000 UTC m=+48.441549958" Aug 13 07:12:30.466594 containerd[1466]: time="2025-08-13T07:12:30.465515761Z" level=info msg="StartContainer for \"11801389a59312e88202209666f8012cd460ea94b753b04ac525b91ccf426de1\" returns successfully" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.453 [INFO][4654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.454 [INFO][4654] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" iface="eth0" netns="/var/run/netns/cni-aa7d392a-a760-6ae5-4c46-c491e791e8e1" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.454 [INFO][4654] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" iface="eth0" netns="/var/run/netns/cni-aa7d392a-a760-6ae5-4c46-c491e791e8e1" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.454 [INFO][4654] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" iface="eth0" netns="/var/run/netns/cni-aa7d392a-a760-6ae5-4c46-c491e791e8e1" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.455 [INFO][4654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.455 [INFO][4654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.521 [INFO][4676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" HandleID="k8s-pod-network.769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.521 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.521 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.533 [WARNING][4676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" HandleID="k8s-pod-network.769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.533 [INFO][4676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" HandleID="k8s-pod-network.769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.536 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:30.542242 containerd[1466]: 2025-08-13 07:12:30.539 [INFO][4654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:30.546706 containerd[1466]: time="2025-08-13T07:12:30.546511422Z" level=info msg="TearDown network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\" successfully" Aug 13 07:12:30.546706 containerd[1466]: time="2025-08-13T07:12:30.546560239Z" level=info msg="StopPodSandbox for \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\" returns successfully" Aug 13 07:12:30.547663 containerd[1466]: time="2025-08-13T07:12:30.547632010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-82cfm,Uid:89495aa1-ee61-41c8-8f49-33992f3f9e26,Namespace:kube-system,Attempt:1,}" Aug 13 07:12:30.548836 systemd[1]: run-netns-cni\x2daa7d392a\x2da760\x2d6ae5\x2d4c46\x2dc491e791e8e1.mount: Deactivated successfully. 
Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.494 [INFO][4655] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.494 [INFO][4655] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" iface="eth0" netns="/var/run/netns/cni-54980430-ae0e-bd37-f03d-62b7163b520f" Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.495 [INFO][4655] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" iface="eth0" netns="/var/run/netns/cni-54980430-ae0e-bd37-f03d-62b7163b520f" Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.495 [INFO][4655] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" iface="eth0" netns="/var/run/netns/cni-54980430-ae0e-bd37-f03d-62b7163b520f" Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.495 [INFO][4655] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.495 [INFO][4655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.544 [INFO][4681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" HandleID="k8s-pod-network.3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:30.566266 containerd[1466]: 
2025-08-13 07:12:30.544 [INFO][4681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.544 [INFO][4681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.558 [WARNING][4681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" HandleID="k8s-pod-network.3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.558 [INFO][4681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" HandleID="k8s-pod-network.3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.560 [INFO][4681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:30.566266 containerd[1466]: 2025-08-13 07:12:30.563 [INFO][4655] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:30.567039 containerd[1466]: time="2025-08-13T07:12:30.566705032Z" level=info msg="TearDown network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\" successfully" Aug 13 07:12:30.567039 containerd[1466]: time="2025-08-13T07:12:30.566746374Z" level=info msg="StopPodSandbox for \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\" returns successfully" Aug 13 07:12:30.568496 containerd[1466]: time="2025-08-13T07:12:30.568308276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5mqk4,Uid:1cb2b53d-86e2-4e78-ad45-c6b1da7fe653,Namespace:calico-system,Attempt:1,}" Aug 13 07:12:30.791391 systemd-networkd[1375]: cali141b38d7cae: Link UP Aug 13 07:12:30.797190 systemd-networkd[1375]: cali141b38d7cae: Gained carrier Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.652 [INFO][4689] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0 coredns-674b8bbfcf- kube-system 89495aa1-ee61-41c8-8f49-33992f3f9e26 968 0 2025-08-13 07:11:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal coredns-674b8bbfcf-82cfm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali141b38d7cae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Namespace="kube-system" Pod="coredns-674b8bbfcf-82cfm" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 
07:12:30.653 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Namespace="kube-system" Pod="coredns-674b8bbfcf-82cfm" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.709 [INFO][4714] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" HandleID="k8s-pod-network.b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.709 [INFO][4714] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" HandleID="k8s-pod-network.b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-82cfm", "timestamp":"2025-08-13 07:12:30.709059537 +0000 UTC"}, Hostname:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.710 [INFO][4714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.710 [INFO][4714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.710 [INFO][4714] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal' Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.725 [INFO][4714] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.736 [INFO][4714] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.753 [INFO][4714] ipam/ipam.go 511: Trying affinity for 192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.756 [INFO][4714] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.759 [INFO][4714] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.759 [INFO][4714] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.761 [INFO][4714] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896 Aug 13 
07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.769 [INFO][4714] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.778 [INFO][4714] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.6/26] block=192.168.79.0/26 handle="k8s-pod-network.b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.778 [INFO][4714] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.6/26] handle="k8s-pod-network.b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.779 [INFO][4714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:12:30.822999 containerd[1466]: 2025-08-13 07:12:30.779 [INFO][4714] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.6/26] IPv6=[] ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" HandleID="k8s-pod-network.b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.824396 containerd[1466]: 2025-08-13 07:12:30.782 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Namespace="kube-system" Pod="coredns-674b8bbfcf-82cfm" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89495aa1-ee61-41c8-8f49-33992f3f9e26", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-82cfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali141b38d7cae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:30.824396 containerd[1466]: 2025-08-13 07:12:30.783 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.6/32] ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Namespace="kube-system" Pod="coredns-674b8bbfcf-82cfm" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.824396 containerd[1466]: 2025-08-13 07:12:30.783 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali141b38d7cae ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Namespace="kube-system" Pod="coredns-674b8bbfcf-82cfm" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.824396 containerd[1466]: 2025-08-13 07:12:30.800 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Namespace="kube-system" Pod="coredns-674b8bbfcf-82cfm" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.824396 containerd[1466]: 2025-08-13 07:12:30.801 [INFO][4689] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Namespace="kube-system" Pod="coredns-674b8bbfcf-82cfm" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89495aa1-ee61-41c8-8f49-33992f3f9e26", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896", Pod:"coredns-674b8bbfcf-82cfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali141b38d7cae", MAC:"ee:f1:45:19:b7:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:30.824396 containerd[1466]: 2025-08-13 07:12:30.819 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896" Namespace="kube-system" Pod="coredns-674b8bbfcf-82cfm" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:30.870367 containerd[1466]: time="2025-08-13T07:12:30.870148160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:30.870592 containerd[1466]: time="2025-08-13T07:12:30.870360735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:30.870592 containerd[1466]: time="2025-08-13T07:12:30.870391973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:30.870819 containerd[1466]: time="2025-08-13T07:12:30.870608808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:30.919531 systemd[1]: Started cri-containerd-b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896.scope - libcontainer container b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896. 
Aug 13 07:12:30.920643 systemd-networkd[1375]: cali5b9a974e7d0: Link UP Aug 13 07:12:30.921016 systemd-networkd[1375]: cali5b9a974e7d0: Gained carrier Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.676 [INFO][4700] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0 goldmane-768f4c5c69- calico-system 1cb2b53d-86e2-4e78-ad45-c6b1da7fe653 972 0 2025-08-13 07:12:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal goldmane-768f4c5c69-5mqk4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5b9a974e7d0 [] [] }} ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Namespace="calico-system" Pod="goldmane-768f4c5c69-5mqk4" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.677 [INFO][4700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Namespace="calico-system" Pod="goldmane-768f4c5c69-5mqk4" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.744 [INFO][4719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" HandleID="k8s-pod-network.be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" 
Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.744 [INFO][4719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" HandleID="k8s-pod-network.be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fd80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", "pod":"goldmane-768f4c5c69-5mqk4", "timestamp":"2025-08-13 07:12:30.74441206 +0000 UTC"}, Hostname:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.744 [INFO][4719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.779 [INFO][4719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.779 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal' Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.825 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.840 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.853 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.856 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.863 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.863 [INFO][4719] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.867 [INFO][4719] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.884 [INFO][4719] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.79.0/26 handle="k8s-pod-network.be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.900 [INFO][4719] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.7/26] block=192.168.79.0/26 handle="k8s-pod-network.be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.900 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.7/26] handle="k8s-pod-network.be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.900 [INFO][4719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:30.953935 containerd[1466]: 2025-08-13 07:12:30.900 [INFO][4719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.7/26] IPv6=[] ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" HandleID="k8s-pod-network.be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:30.955820 containerd[1466]: 2025-08-13 07:12:30.910 [INFO][4700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Namespace="calico-system" Pod="goldmane-768f4c5c69-5mqk4" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-768f4c5c69-5mqk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b9a974e7d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:30.955820 containerd[1466]: 2025-08-13 07:12:30.911 [INFO][4700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.7/32] ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Namespace="calico-system" Pod="goldmane-768f4c5c69-5mqk4" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:30.955820 containerd[1466]: 2025-08-13 07:12:30.911 [INFO][4700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b9a974e7d0 
ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Namespace="calico-system" Pod="goldmane-768f4c5c69-5mqk4" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:30.955820 containerd[1466]: 2025-08-13 07:12:30.916 [INFO][4700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Namespace="calico-system" Pod="goldmane-768f4c5c69-5mqk4" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:30.955820 containerd[1466]: 2025-08-13 07:12:30.917 [INFO][4700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Namespace="calico-system" Pod="goldmane-768f4c5c69-5mqk4" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b", Pod:"goldmane-768f4c5c69-5mqk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b9a974e7d0", MAC:"32:ad:f8:b3:2f:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:30.955820 containerd[1466]: 2025-08-13 07:12:30.947 [INFO][4700] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b" Namespace="calico-system" Pod="goldmane-768f4c5c69-5mqk4" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:31.004197 containerd[1466]: time="2025-08-13T07:12:31.003880380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:31.005238 containerd[1466]: time="2025-08-13T07:12:31.004575289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:31.008118 containerd[1466]: time="2025-08-13T07:12:31.007774630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:31.008118 containerd[1466]: time="2025-08-13T07:12:31.007941983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:31.054442 systemd[1]: Started cri-containerd-be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b.scope - libcontainer container be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b. Aug 13 07:12:31.062786 containerd[1466]: time="2025-08-13T07:12:31.062734541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-82cfm,Uid:89495aa1-ee61-41c8-8f49-33992f3f9e26,Namespace:kube-system,Attempt:1,} returns sandbox id \"b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896\"" Aug 13 07:12:31.071690 containerd[1466]: time="2025-08-13T07:12:31.071627965Z" level=info msg="CreateContainer within sandbox \"b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:12:31.105069 containerd[1466]: time="2025-08-13T07:12:31.104836294Z" level=info msg="CreateContainer within sandbox \"b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f023967c21c7794c2090c18c36e4ea9977d6b6e874edb09f26961737ba949774\"" Aug 13 07:12:31.106332 containerd[1466]: time="2025-08-13T07:12:31.106257216Z" level=info msg="StartContainer for \"f023967c21c7794c2090c18c36e4ea9977d6b6e874edb09f26961737ba949774\"" Aug 13 07:12:31.159755 containerd[1466]: time="2025-08-13T07:12:31.159593902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-5mqk4,Uid:1cb2b53d-86e2-4e78-ad45-c6b1da7fe653,Namespace:calico-system,Attempt:1,} returns sandbox id \"be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b\"" Aug 13 07:12:31.163654 systemd[1]: Started cri-containerd-f023967c21c7794c2090c18c36e4ea9977d6b6e874edb09f26961737ba949774.scope - libcontainer container f023967c21c7794c2090c18c36e4ea9977d6b6e874edb09f26961737ba949774. 
Aug 13 07:12:31.224739 containerd[1466]: time="2025-08-13T07:12:31.224425848Z" level=info msg="StartContainer for \"f023967c21c7794c2090c18c36e4ea9977d6b6e874edb09f26961737ba949774\" returns successfully" Aug 13 07:12:31.251009 systemd[1]: run-netns-cni\x2d54980430\x2dae0e\x2dbd37\x2df03d\x2d62b7163b520f.mount: Deactivated successfully. Aug 13 07:12:31.327375 containerd[1466]: time="2025-08-13T07:12:31.326698223Z" level=info msg="StopPodSandbox for \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\"" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.464 [INFO][4875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.464 [INFO][4875] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" iface="eth0" netns="/var/run/netns/cni-9fe4bf1b-a20a-c45d-e453-14314d4ffa18" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.464 [INFO][4875] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" iface="eth0" netns="/var/run/netns/cni-9fe4bf1b-a20a-c45d-e453-14314d4ffa18" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.465 [INFO][4875] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" iface="eth0" netns="/var/run/netns/cni-9fe4bf1b-a20a-c45d-e453-14314d4ffa18" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.465 [INFO][4875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.465 [INFO][4875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.530 [INFO][4882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" HandleID="k8s-pod-network.dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.531 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.531 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.543 [WARNING][4882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" HandleID="k8s-pod-network.dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.543 [INFO][4882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" HandleID="k8s-pod-network.dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.546 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:31.551238 containerd[1466]: 2025-08-13 07:12:31.548 [INFO][4875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:31.560309 containerd[1466]: time="2025-08-13T07:12:31.560250519Z" level=info msg="TearDown network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\" successfully" Aug 13 07:12:31.560309 containerd[1466]: time="2025-08-13T07:12:31.560304415Z" level=info msg="StopPodSandbox for \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\" returns successfully" Aug 13 07:12:31.561533 systemd[1]: run-netns-cni\x2d9fe4bf1b\x2da20a\x2dc45d\x2de453\x2d14314d4ffa18.mount: Deactivated successfully. 
Aug 13 07:12:31.562702 containerd[1466]: time="2025-08-13T07:12:31.562357165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7549bfdd56-x7dn2,Uid:714ca8a9-a538-482c-a68a-c0ed3f848627,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:12:31.832454 systemd-networkd[1375]: cali0881a783633: Link UP Aug 13 07:12:31.837420 systemd-networkd[1375]: cali0881a783633: Gained carrier Aug 13 07:12:31.863252 kubelet[2615]: I0813 07:12:31.862759 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-82cfm" podStartSLOduration=44.862729849 podStartE2EDuration="44.862729849s" podCreationTimestamp="2025-08-13 07:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:31.783230348 +0000 UTC m=+49.774133966" watchObservedRunningTime="2025-08-13 07:12:31.862729849 +0000 UTC m=+49.853633463" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.658 [INFO][4889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0 calico-apiserver-7549bfdd56- calico-apiserver 714ca8a9-a538-482c-a68a-c0ed3f848627 988 0 2025-08-13 07:11:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7549bfdd56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal calico-apiserver-7549bfdd56-x7dn2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0881a783633 [] [] }} ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-x7dn2" 
WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.658 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-x7dn2" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.714 [INFO][4901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" HandleID="k8s-pod-network.6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.715 [INFO][4901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" HandleID="k8s-pod-network.6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd0c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", "pod":"calico-apiserver-7549bfdd56-x7dn2", "timestamp":"2025-08-13 07:12:31.714595207 +0000 UTC"}, Hostname:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:12:31.866069 
containerd[1466]: 2025-08-13 07:12:31.715 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.715 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.715 [INFO][4901] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal' Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.731 [INFO][4901] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.749 [INFO][4901] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.765 [INFO][4901] ipam/ipam.go 511: Trying affinity for 192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.772 [INFO][4901] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.797 [INFO][4901] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.797 [INFO][4901] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.801 [INFO][4901] ipam/ipam.go 1764: 
Creating new handle: k8s-pod-network.6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261 Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.807 [INFO][4901] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.820 [INFO][4901] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.8/26] block=192.168.79.0/26 handle="k8s-pod-network.6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.820 [INFO][4901] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.8/26] handle="k8s-pod-network.6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" host="ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal" Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.820 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:12:31.866069 containerd[1466]: 2025-08-13 07:12:31.820 [INFO][4901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.8/26] IPv6=[] ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" HandleID="k8s-pod-network.6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.869573 containerd[1466]: 2025-08-13 07:12:31.824 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-x7dn2" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0", GenerateName:"calico-apiserver-7549bfdd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"714ca8a9-a538-482c-a68a-c0ed3f848627", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7549bfdd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", 
ContainerID:"", Pod:"calico-apiserver-7549bfdd56-x7dn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0881a783633", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:31.869573 containerd[1466]: 2025-08-13 07:12:31.824 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.8/32] ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-x7dn2" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.869573 containerd[1466]: 2025-08-13 07:12:31.825 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0881a783633 ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-x7dn2" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.869573 containerd[1466]: 2025-08-13 07:12:31.841 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-x7dn2" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.869573 containerd[1466]: 2025-08-13 07:12:31.844 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Namespace="calico-apiserver" 
Pod="calico-apiserver-7549bfdd56-x7dn2" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0", GenerateName:"calico-apiserver-7549bfdd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"714ca8a9-a538-482c-a68a-c0ed3f848627", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7549bfdd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261", Pod:"calico-apiserver-7549bfdd56-x7dn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0881a783633", MAC:"fe:04:45:7f:4d:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:31.869573 containerd[1466]: 2025-08-13 07:12:31.861 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261" Namespace="calico-apiserver" Pod="calico-apiserver-7549bfdd56-x7dn2" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:31.953666 containerd[1466]: time="2025-08-13T07:12:31.953549546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:31.953666 containerd[1466]: time="2025-08-13T07:12:31.953626145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:31.954075 containerd[1466]: time="2025-08-13T07:12:31.953963732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:31.954477 containerd[1466]: time="2025-08-13T07:12:31.954423545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:31.998479 systemd[1]: Started cri-containerd-6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261.scope - libcontainer container 6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261. 
Aug 13 07:12:32.111344 containerd[1466]: time="2025-08-13T07:12:32.109982502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7549bfdd56-x7dn2,Uid:714ca8a9-a538-482c-a68a-c0ed3f848627,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261\"" Aug 13 07:12:32.296317 systemd-networkd[1375]: cali141b38d7cae: Gained IPv6LL Aug 13 07:12:32.679581 systemd-networkd[1375]: cali5b9a974e7d0: Gained IPv6LL Aug 13 07:12:33.525444 containerd[1466]: time="2025-08-13T07:12:33.525379151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:33.527046 containerd[1466]: time="2025-08-13T07:12:33.526928561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:12:33.528231 containerd[1466]: time="2025-08-13T07:12:33.528110529Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:33.533270 containerd[1466]: time="2025-08-13T07:12:33.531966549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:33.533270 containerd[1466]: time="2025-08-13T07:12:33.533096152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.319662394s" Aug 13 07:12:33.533270 containerd[1466]: 
time="2025-08-13T07:12:33.533139013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:12:33.535078 containerd[1466]: time="2025-08-13T07:12:33.535044357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:12:33.564628 containerd[1466]: time="2025-08-13T07:12:33.564573668Z" level=info msg="CreateContainer within sandbox \"02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:12:33.589367 containerd[1466]: time="2025-08-13T07:12:33.588961953Z" level=info msg="CreateContainer within sandbox \"02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"856ab5704ccdccac94af8bb735e8f042940de31d2d4b5577d1943e18f03b5bea\"" Aug 13 07:12:33.591251 containerd[1466]: time="2025-08-13T07:12:33.590847375Z" level=info msg="StartContainer for \"856ab5704ccdccac94af8bb735e8f042940de31d2d4b5577d1943e18f03b5bea\"" Aug 13 07:12:33.639979 systemd-networkd[1375]: cali0881a783633: Gained IPv6LL Aug 13 07:12:33.644498 systemd[1]: Started cri-containerd-856ab5704ccdccac94af8bb735e8f042940de31d2d4b5577d1943e18f03b5bea.scope - libcontainer container 856ab5704ccdccac94af8bb735e8f042940de31d2d4b5577d1943e18f03b5bea. 
Aug 13 07:12:33.710345 containerd[1466]: time="2025-08-13T07:12:33.710255842Z" level=info msg="StartContainer for \"856ab5704ccdccac94af8bb735e8f042940de31d2d4b5577d1943e18f03b5bea\" returns successfully" Aug 13 07:12:33.763032 kubelet[2615]: I0813 07:12:33.762986 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:12:33.807402 kubelet[2615]: I0813 07:12:33.805845 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d489544c6-rtpdd" podStartSLOduration=24.493889441 podStartE2EDuration="30.805813412s" podCreationTimestamp="2025-08-13 07:12:03 +0000 UTC" firstStartedPulling="2025-08-13 07:12:27.222841476 +0000 UTC m=+45.213745083" lastFinishedPulling="2025-08-13 07:12:33.534765458 +0000 UTC m=+51.525669054" observedRunningTime="2025-08-13 07:12:33.799892713 +0000 UTC m=+51.790796335" watchObservedRunningTime="2025-08-13 07:12:33.805813412 +0000 UTC m=+51.796717030" Aug 13 07:12:34.817398 systemd[1]: run-containerd-runc-k8s.io-856ab5704ccdccac94af8bb735e8f042940de31d2d4b5577d1943e18f03b5bea-runc.bHnojE.mount: Deactivated successfully. 
Aug 13 07:12:36.513161 ntpd[1428]: Listen normally on 8 vxlan.calico 192.168.79.0:123 Aug 13 07:12:36.513391 ntpd[1428]: Listen normally on 9 calibf8c2108990 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 07:12:36.513480 ntpd[1428]: Listen normally on 10 vxlan.calico [fe80::6427:7ff:fec3:98e6%5]:123 Aug 13 07:12:36.513552 ntpd[1428]: Listen normally on 11 cali35e20f4bfcc [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 07:12:36.513610 ntpd[1428]: Listen normally on 12 calie77b8d90e6e [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 07:12:36.513668 ntpd[1428]: Listen normally on 13 cali99ae51b5a8f [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 07:12:36.513728 ntpd[1428]: Listen normally on
14 cali0d2403214e9 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 07:12:36.513798 ntpd[1428]: Listen normally on 15 cali141b38d7cae [fe80::ecee:eeff:feee:eeee%12]:123 Aug 13 07:12:36.513859 ntpd[1428]: Listen normally on 16 cali5b9a974e7d0 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 13 07:12:36.513918 ntpd[1428]: Listen normally on 17 cali0881a783633 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 13 07:12:36.892899 containerd[1466]: time="2025-08-13T07:12:36.892619461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:36.894807 containerd[1466]: time="2025-08-13T07:12:36.894732578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:12:36.896233 containerd[1466]: time="2025-08-13T07:12:36.896136250Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:36.899277 containerd[1466]: time="2025-08-13T07:12:36.899205193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:36.900297 containerd[1466]: time="2025-08-13T07:12:36.900257054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.365169717s" Aug 13 07:12:36.900564 containerd[1466]: time="2025-08-13T07:12:36.900421226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:12:36.902060 containerd[1466]: time="2025-08-13T07:12:36.901826976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:12:36.906006 containerd[1466]: time="2025-08-13T07:12:36.905956915Z" level=info msg="CreateContainer within sandbox \"887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:12:36.927282 containerd[1466]: time="2025-08-13T07:12:36.927225788Z" level=info msg="CreateContainer within sandbox \"887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f63738c3e888075fc6a328a2365d5af9da3da1361872d62bd7c2d335194a6696\"" Aug 13 07:12:36.929299 containerd[1466]: time="2025-08-13T07:12:36.928314811Z" level=info msg="StartContainer for \"f63738c3e888075fc6a328a2365d5af9da3da1361872d62bd7c2d335194a6696\"" Aug 13 07:12:36.990459 systemd[1]: Started cri-containerd-f63738c3e888075fc6a328a2365d5af9da3da1361872d62bd7c2d335194a6696.scope - libcontainer container f63738c3e888075fc6a328a2365d5af9da3da1361872d62bd7c2d335194a6696. 
Aug 13 07:12:37.052232 containerd[1466]: time="2025-08-13T07:12:37.052090906Z" level=info msg="StartContainer for \"f63738c3e888075fc6a328a2365d5af9da3da1361872d62bd7c2d335194a6696\" returns successfully" Aug 13 07:12:37.833635 kubelet[2615]: I0813 07:12:37.833191 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7549bfdd56-dfv7p" podStartSLOduration=31.05179564 podStartE2EDuration="39.833142924s" podCreationTimestamp="2025-08-13 07:11:58 +0000 UTC" firstStartedPulling="2025-08-13 07:12:28.120279568 +0000 UTC m=+46.111183175" lastFinishedPulling="2025-08-13 07:12:36.90162684 +0000 UTC m=+54.892530459" observedRunningTime="2025-08-13 07:12:37.829991158 +0000 UTC m=+55.820894776" watchObservedRunningTime="2025-08-13 07:12:37.833142924 +0000 UTC m=+55.824046519" Aug 13 07:12:38.463248 containerd[1466]: time="2025-08-13T07:12:38.463061373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:38.465864 containerd[1466]: time="2025-08-13T07:12:38.465781157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:12:38.468648 containerd[1466]: time="2025-08-13T07:12:38.468587734Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:38.474435 containerd[1466]: time="2025-08-13T07:12:38.474365524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size 
\"16196439\" in 1.57248381s" Aug 13 07:12:38.474726 containerd[1466]: time="2025-08-13T07:12:38.474643696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:12:38.475023 containerd[1466]: time="2025-08-13T07:12:38.474588142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:38.479218 containerd[1466]: time="2025-08-13T07:12:38.479098483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:12:38.486364 containerd[1466]: time="2025-08-13T07:12:38.486321038Z" level=info msg="CreateContainer within sandbox \"7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:12:38.514554 containerd[1466]: time="2025-08-13T07:12:38.514492627Z" level=info msg="CreateContainer within sandbox \"7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"de77b1025e3fcbf67b03504be28ad52b12badae97be9cb67e2ad34a87fdd8fd9\"" Aug 13 07:12:38.518259 containerd[1466]: time="2025-08-13T07:12:38.517077016Z" level=info msg="StartContainer for \"de77b1025e3fcbf67b03504be28ad52b12badae97be9cb67e2ad34a87fdd8fd9\"" Aug 13 07:12:38.520043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807475117.mount: Deactivated successfully. Aug 13 07:12:38.597450 systemd[1]: Started cri-containerd-de77b1025e3fcbf67b03504be28ad52b12badae97be9cb67e2ad34a87fdd8fd9.scope - libcontainer container de77b1025e3fcbf67b03504be28ad52b12badae97be9cb67e2ad34a87fdd8fd9. 
Aug 13 07:12:38.666873 containerd[1466]: time="2025-08-13T07:12:38.666821365Z" level=info msg="StartContainer for \"de77b1025e3fcbf67b03504be28ad52b12badae97be9cb67e2ad34a87fdd8fd9\" returns successfully" Aug 13 07:12:38.836641 kubelet[2615]: I0813 07:12:38.835587 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-phmj8" podStartSLOduration=23.541683547 podStartE2EDuration="35.835561361s" podCreationTimestamp="2025-08-13 07:12:03 +0000 UTC" firstStartedPulling="2025-08-13 07:12:26.184347256 +0000 UTC m=+44.175250865" lastFinishedPulling="2025-08-13 07:12:38.478225083 +0000 UTC m=+56.469128679" observedRunningTime="2025-08-13 07:12:38.833953634 +0000 UTC m=+56.824857254" watchObservedRunningTime="2025-08-13 07:12:38.835561361 +0000 UTC m=+56.826464979" Aug 13 07:12:39.471202 kubelet[2615]: I0813 07:12:39.470729 2615 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:12:39.471202 kubelet[2615]: I0813 07:12:39.470974 2615 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:12:39.812432 kubelet[2615]: I0813 07:12:39.812112 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:12:41.445604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2449632987.mount: Deactivated successfully. Aug 13 07:12:42.283072 containerd[1466]: time="2025-08-13T07:12:42.282623609Z" level=info msg="StopPodSandbox for \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\"" Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.427 [WARNING][5222] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89495aa1-ee61-41c8-8f49-33992f3f9e26", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896", Pod:"coredns-674b8bbfcf-82cfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali141b38d7cae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.432 [INFO][5222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.436 [INFO][5222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" iface="eth0" netns="" Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.436 [INFO][5222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.438 [INFO][5222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.597 [INFO][5231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" HandleID="k8s-pod-network.769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.598 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.599 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.635 [WARNING][5231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" HandleID="k8s-pod-network.769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.635 [INFO][5231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" HandleID="k8s-pod-network.769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.638 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:42.657781 containerd[1466]: 2025-08-13 07:12:42.647 [INFO][5222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:42.660937 containerd[1466]: time="2025-08-13T07:12:42.659119727Z" level=info msg="TearDown network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\" successfully" Aug 13 07:12:42.660937 containerd[1466]: time="2025-08-13T07:12:42.659228822Z" level=info msg="StopPodSandbox for \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\" returns successfully" Aug 13 07:12:42.660937 containerd[1466]: time="2025-08-13T07:12:42.660451561Z" level=info msg="RemovePodSandbox for \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\"" Aug 13 07:12:42.660937 containerd[1466]: time="2025-08-13T07:12:42.660493367Z" level=info msg="Forcibly stopping sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\"" Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.823 [WARNING][5246] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89495aa1-ee61-41c8-8f49-33992f3f9e26", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"b6c96ef06c95734b86047a9eed4367bbd02299a0622087798e8c05f4ce19e896", Pod:"coredns-674b8bbfcf-82cfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali141b38d7cae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.825 [INFO][5246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.825 [INFO][5246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" iface="eth0" netns="" Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.825 [INFO][5246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.825 [INFO][5246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.962 [INFO][5255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" HandleID="k8s-pod-network.769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.963 [INFO][5255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.963 [INFO][5255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.992 [WARNING][5255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" HandleID="k8s-pod-network.769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.992 [INFO][5255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" HandleID="k8s-pod-network.769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--82cfm-eth0" Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:42.994 [INFO][5255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:43.009205 containerd[1466]: 2025-08-13 07:12:43.000 [INFO][5246] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8" Aug 13 07:12:43.009205 containerd[1466]: time="2025-08-13T07:12:43.009037072Z" level=info msg="TearDown network for sandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\" successfully" Aug 13 07:12:43.024222 containerd[1466]: time="2025-08-13T07:12:43.022796790Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:12:43.024222 containerd[1466]: time="2025-08-13T07:12:43.022909074Z" level=info msg="RemovePodSandbox \"769cc67af2d9576dd1510091cff8792bf1077a8386097a9bcaa77267813c3ec8\" returns successfully" Aug 13 07:12:43.024222 containerd[1466]: time="2025-08-13T07:12:43.023903008Z" level=info msg="StopPodSandbox for \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\"" Aug 13 07:12:43.179222 containerd[1466]: time="2025-08-13T07:12:43.178587720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:43.185359 containerd[1466]: time="2025-08-13T07:12:43.185281517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:12:43.187376 containerd[1466]: time="2025-08-13T07:12:43.187322004Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:43.194691 containerd[1466]: time="2025-08-13T07:12:43.194520331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.715373273s" Aug 13 07:12:43.194691 containerd[1466]: time="2025-08-13T07:12:43.194578639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:12:43.200220 containerd[1466]: time="2025-08-13T07:12:43.197834261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:43.201447 containerd[1466]: time="2025-08-13T07:12:43.201408416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:12:43.208760 containerd[1466]: time="2025-08-13T07:12:43.208717315Z" level=info msg="CreateContainer within sandbox \"be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:12:43.241019 containerd[1466]: time="2025-08-13T07:12:43.240966144Z" level=info msg="CreateContainer within sandbox \"be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9c79a1de69905567aaf4a5fa174ae7fccac709eab83ad266cbeec379137116b6\"" Aug 13 07:12:43.243070 containerd[1466]: time="2025-08-13T07:12:43.242455453Z" level=info msg="StartContainer for \"9c79a1de69905567aaf4a5fa174ae7fccac709eab83ad266cbeec379137116b6\"" Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.150 [WARNING][5270] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"824f485a-87b0-420d-9975-44d490a376b1", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00", Pod:"coredns-674b8bbfcf-cmvkg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie77b8d90e6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.150 [INFO][5270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.150 [INFO][5270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" iface="eth0" netns="" Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.150 [INFO][5270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.150 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.231 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" HandleID="k8s-pod-network.23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.244 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.244 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.277 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" HandleID="k8s-pod-network.23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.278 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" HandleID="k8s-pod-network.23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.289 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:43.302422 containerd[1466]: 2025-08-13 07:12:43.296 [INFO][5270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:43.302422 containerd[1466]: time="2025-08-13T07:12:43.302361427Z" level=info msg="TearDown network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\" successfully" Aug 13 07:12:43.302422 containerd[1466]: time="2025-08-13T07:12:43.302399010Z" level=info msg="StopPodSandbox for \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\" returns successfully" Aug 13 07:12:43.311153 containerd[1466]: time="2025-08-13T07:12:43.308881470Z" level=info msg="RemovePodSandbox for \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\"" Aug 13 07:12:43.311153 containerd[1466]: time="2025-08-13T07:12:43.309085432Z" level=info msg="Forcibly stopping sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\"" Aug 13 07:12:43.373052 systemd[1]: run-containerd-runc-k8s.io-9c79a1de69905567aaf4a5fa174ae7fccac709eab83ad266cbeec379137116b6-runc.8UsxVu.mount: 
Deactivated successfully. Aug 13 07:12:43.387458 systemd[1]: Started cri-containerd-9c79a1de69905567aaf4a5fa174ae7fccac709eab83ad266cbeec379137116b6.scope - libcontainer container 9c79a1de69905567aaf4a5fa174ae7fccac709eab83ad266cbeec379137116b6. Aug 13 07:12:43.545240 containerd[1466]: time="2025-08-13T07:12:43.545136946Z" level=info msg="StartContainer for \"9c79a1de69905567aaf4a5fa174ae7fccac709eab83ad266cbeec379137116b6\" returns successfully" Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.450 [WARNING][5309] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"824f485a-87b0-420d-9975-44d490a376b1", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"cefca606cdef38fbff6c3480c5bb51513d246db2bdce4031f6172cfe3fb08d00", Pod:"coredns-674b8bbfcf-cmvkg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie77b8d90e6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.451 [INFO][5309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.451 [INFO][5309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" iface="eth0" netns="" Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.452 [INFO][5309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.452 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.523 [INFO][5327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" HandleID="k8s-pod-network.23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.523 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.523 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.539 [WARNING][5327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" HandleID="k8s-pod-network.23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.539 [INFO][5327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" HandleID="k8s-pod-network.23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--cmvkg-eth0" Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.544 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:43.550837 containerd[1466]: 2025-08-13 07:12:43.548 [INFO][5309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254" Aug 13 07:12:43.552656 containerd[1466]: time="2025-08-13T07:12:43.550999125Z" level=info msg="TearDown network for sandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\" successfully" Aug 13 07:12:43.563959 containerd[1466]: time="2025-08-13T07:12:43.563875684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:12:43.564139 containerd[1466]: time="2025-08-13T07:12:43.564006958Z" level=info msg="RemovePodSandbox \"23803e6503aa38f9145f36c2a66373186263dc1f3b5842c7737d8e7ea0620254\" returns successfully" Aug 13 07:12:43.564792 containerd[1466]: time="2025-08-13T07:12:43.564750664Z" level=info msg="StopPodSandbox for \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\"" Aug 13 07:12:43.581855 containerd[1466]: time="2025-08-13T07:12:43.581798054Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:43.583439 containerd[1466]: time="2025-08-13T07:12:43.582893267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:12:43.593039 containerd[1466]: time="2025-08-13T07:12:43.592919158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 391.300467ms" Aug 13 07:12:43.593252 containerd[1466]: time="2025-08-13T07:12:43.593038216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:12:43.600396 containerd[1466]: time="2025-08-13T07:12:43.600345916Z" level=info msg="CreateContainer within sandbox \"6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:12:43.637742 containerd[1466]: time="2025-08-13T07:12:43.635492499Z" level=info msg="CreateContainer within sandbox \"6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"528b2ac9dda668513a255c6e543168b53bf500bcd851481b209afba1980e0028\"" Aug 13 07:12:43.637742 containerd[1466]: time="2025-08-13T07:12:43.636784968Z" level=info msg="StartContainer for \"528b2ac9dda668513a255c6e543168b53bf500bcd851481b209afba1980e0028\"" Aug 13 07:12:43.694434 systemd[1]: Started cri-containerd-528b2ac9dda668513a255c6e543168b53bf500bcd851481b209afba1980e0028.scope - libcontainer container 528b2ac9dda668513a255c6e543168b53bf500bcd851481b209afba1980e0028. Aug 13 07:12:43.991192 kubelet[2615]: I0813 07:12:43.990998 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-5mqk4" podStartSLOduration=28.953269944 podStartE2EDuration="40.990974491s" podCreationTimestamp="2025-08-13 07:12:03 +0000 UTC" firstStartedPulling="2025-08-13 07:12:31.161683106 +0000 UTC m=+49.152586714" lastFinishedPulling="2025-08-13 07:12:43.199387647 +0000 UTC m=+61.190291261" observedRunningTime="2025-08-13 07:12:43.984508408 +0000 UTC m=+61.975412026" watchObservedRunningTime="2025-08-13 07:12:43.990974491 +0000 UTC m=+61.981878128" Aug 13 07:12:43.994729 containerd[1466]: time="2025-08-13T07:12:43.994563442Z" level=info msg="StartContainer for \"528b2ac9dda668513a255c6e543168b53bf500bcd851481b209afba1980e0028\" returns successfully" Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.782 [WARNING][5350] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.782 [INFO][5350] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:43.995932 containerd[1466]: 
2025-08-13 07:12:43.782 [INFO][5350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" iface="eth0" netns="" Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.783 [INFO][5350] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.783 [INFO][5350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.827 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" HandleID="k8s-pod-network.8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.829 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.829 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.929 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" HandleID="k8s-pod-network.8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.930 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" HandleID="k8s-pod-network.8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.975 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:43.995932 containerd[1466]: 2025-08-13 07:12:43.983 [INFO][5350] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:43.997016 containerd[1466]: time="2025-08-13T07:12:43.996806619Z" level=info msg="TearDown network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\" successfully" Aug 13 07:12:43.997016 containerd[1466]: time="2025-08-13T07:12:43.996860526Z" level=info msg="StopPodSandbox for \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\" returns successfully" Aug 13 07:12:43.999896 containerd[1466]: time="2025-08-13T07:12:43.999839912Z" level=info msg="RemovePodSandbox for \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\"" Aug 13 07:12:44.000127 containerd[1466]: time="2025-08-13T07:12:43.999994222Z" level=info msg="Forcibly stopping sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\"" Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.101 [WARNING][5423] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, 
moving forward with the clean up ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" WorkloadEndpoint="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.102 [INFO][5423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.102 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" iface="eth0" netns="" Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.102 [INFO][5423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.102 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.146 [INFO][5439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" HandleID="k8s-pod-network.8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.146 [INFO][5439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.146 [INFO][5439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.173 [WARNING][5439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" HandleID="k8s-pod-network.8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.173 [INFO][5439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" HandleID="k8s-pod-network.8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-whisker--79f55c7ffd--cs8ql-eth0" Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.176 [INFO][5439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:44.181310 containerd[1466]: 2025-08-13 07:12:44.179 [INFO][5423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084" Aug 13 07:12:44.183326 containerd[1466]: time="2025-08-13T07:12:44.182141674Z" level=info msg="TearDown network for sandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\" successfully" Aug 13 07:12:44.189122 containerd[1466]: time="2025-08-13T07:12:44.189063070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:12:44.189494 containerd[1466]: time="2025-08-13T07:12:44.189464494Z" level=info msg="RemovePodSandbox \"8a68459d5292668665c18f125ecf5fc4650d111165b5b9b8a67e4b85ff563084\" returns successfully" Aug 13 07:12:44.190336 containerd[1466]: time="2025-08-13T07:12:44.190304473Z" level=info msg="StopPodSandbox for \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\"" Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.303 [WARNING][5454] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0", GenerateName:"calico-kube-controllers-d489544c6-", Namespace:"calico-system", SelfLink:"", UID:"9dd82a74-cb79-405c-9c40-0ccdbc701a0f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d489544c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f", Pod:"calico-kube-controllers-d489544c6-rtpdd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99ae51b5a8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.304 [INFO][5454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.304 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" iface="eth0" netns="" Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.304 [INFO][5454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.304 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.357 [INFO][5462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" HandleID="k8s-pod-network.598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.358 [INFO][5462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.358 [INFO][5462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.369 [WARNING][5462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" HandleID="k8s-pod-network.598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.369 [INFO][5462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" HandleID="k8s-pod-network.598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.371 [INFO][5462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:44.377946 containerd[1466]: 2025-08-13 07:12:44.374 [INFO][5454] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:44.382244 containerd[1466]: time="2025-08-13T07:12:44.381894157Z" level=info msg="TearDown network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\" successfully" Aug 13 07:12:44.382244 containerd[1466]: time="2025-08-13T07:12:44.381963576Z" level=info msg="StopPodSandbox for \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\" returns successfully" Aug 13 07:12:44.384330 containerd[1466]: time="2025-08-13T07:12:44.383840126Z" level=info msg="RemovePodSandbox for \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\"" Aug 13 07:12:44.384330 containerd[1466]: time="2025-08-13T07:12:44.383894668Z" level=info msg="Forcibly stopping sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\"" Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.461 [WARNING][5480] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0", GenerateName:"calico-kube-controllers-d489544c6-", Namespace:"calico-system", SelfLink:"", UID:"9dd82a74-cb79-405c-9c40-0ccdbc701a0f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d489544c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"02be8f78c65ea8e77cd506aea53c7d5b4a0a0beea8e9961595027bf5b0522d4f", Pod:"calico-kube-controllers-d489544c6-rtpdd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99ae51b5a8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.461 [INFO][5480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:44.528513 
containerd[1466]: 2025-08-13 07:12:44.461 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" iface="eth0" netns="" Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.461 [INFO][5480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.461 [INFO][5480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.506 [INFO][5488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" HandleID="k8s-pod-network.598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.508 [INFO][5488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.508 [INFO][5488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.520 [WARNING][5488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" HandleID="k8s-pod-network.598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.520 [INFO][5488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" HandleID="k8s-pod-network.598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--kube--controllers--d489544c6--rtpdd-eth0" Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.523 [INFO][5488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:44.528513 containerd[1466]: 2025-08-13 07:12:44.525 [INFO][5480] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9" Aug 13 07:12:44.530148 containerd[1466]: time="2025-08-13T07:12:44.529217728Z" level=info msg="TearDown network for sandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\" successfully" Aug 13 07:12:44.536595 containerd[1466]: time="2025-08-13T07:12:44.536388692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:12:44.536595 containerd[1466]: time="2025-08-13T07:12:44.536536306Z" level=info msg="RemovePodSandbox \"598e85e0736aca6dd6306b8323fb3162de4f8c208c0ad76e44244176608260f9\" returns successfully" Aug 13 07:12:44.538902 containerd[1466]: time="2025-08-13T07:12:44.537748901Z" level=info msg="StopPodSandbox for \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\"" Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.614 [WARNING][5504] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0", GenerateName:"calico-apiserver-7549bfdd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"714ca8a9-a538-482c-a68a-c0ed3f848627", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7549bfdd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261", Pod:"calico-apiserver-7549bfdd56-x7dn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.79.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0881a783633", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.615 [INFO][5504] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.615 [INFO][5504] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" iface="eth0" netns="" Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.615 [INFO][5504] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.615 [INFO][5504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.688 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" HandleID="k8s-pod-network.dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.688 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.688 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.702 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" HandleID="k8s-pod-network.dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.702 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" HandleID="k8s-pod-network.dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.705 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:44.709741 containerd[1466]: 2025-08-13 07:12:44.707 [INFO][5504] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:44.710611 containerd[1466]: time="2025-08-13T07:12:44.709807654Z" level=info msg="TearDown network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\" successfully" Aug 13 07:12:44.710611 containerd[1466]: time="2025-08-13T07:12:44.709842902Z" level=info msg="StopPodSandbox for \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\" returns successfully" Aug 13 07:12:44.711550 containerd[1466]: time="2025-08-13T07:12:44.710827475Z" level=info msg="RemovePodSandbox for \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\"" Aug 13 07:12:44.711550 containerd[1466]: time="2025-08-13T07:12:44.710871875Z" level=info msg="Forcibly stopping sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\"" Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.784 [WARNING][5527] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0", GenerateName:"calico-apiserver-7549bfdd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"714ca8a9-a538-482c-a68a-c0ed3f848627", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7549bfdd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"6d776cb0269f965853447b55254cf58b5377c0ffb287e31895a340d1a7278261", Pod:"calico-apiserver-7549bfdd56-x7dn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0881a783633", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.784 [INFO][5527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 
07:12:44.784 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" iface="eth0" netns="" Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.784 [INFO][5527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.784 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.841 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" HandleID="k8s-pod-network.dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.842 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.843 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.856 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" HandleID="k8s-pod-network.dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.856 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" HandleID="k8s-pod-network.dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--x7dn2-eth0" Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.858 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:44.864098 containerd[1466]: 2025-08-13 07:12:44.861 [INFO][5527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614" Aug 13 07:12:44.864937 containerd[1466]: time="2025-08-13T07:12:44.864128380Z" level=info msg="TearDown network for sandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\" successfully" Aug 13 07:12:44.869646 containerd[1466]: time="2025-08-13T07:12:44.869574014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:12:44.869847 containerd[1466]: time="2025-08-13T07:12:44.869727050Z" level=info msg="RemovePodSandbox \"dadcfad8d552120f0c02ad644ed0f1f94a0ef77b48797e69e5cd1773dfdee614\" returns successfully" Aug 13 07:12:44.870476 containerd[1466]: time="2025-08-13T07:12:44.870444783Z" level=info msg="StopPodSandbox for \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\"" Aug 13 07:12:44.963024 kubelet[2615]: I0813 07:12:44.962827 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7549bfdd56-x7dn2" podStartSLOduration=35.481991684 podStartE2EDuration="46.96280273s" podCreationTimestamp="2025-08-13 07:11:58 +0000 UTC" firstStartedPulling="2025-08-13 07:12:32.113856281 +0000 UTC m=+50.104759882" lastFinishedPulling="2025-08-13 07:12:43.59466732 +0000 UTC m=+61.585570928" observedRunningTime="2025-08-13 07:12:44.958801356 +0000 UTC m=+62.949704976" watchObservedRunningTime="2025-08-13 07:12:44.96280273 +0000 UTC m=+62.953706350" Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.017 [WARNING][5548] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0", GenerateName:"calico-apiserver-7549bfdd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"119aced9-7cb0-4d80-8364-7168248c339c", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7549bfdd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048", Pod:"calico-apiserver-7549bfdd56-dfv7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d2403214e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.017 [INFO][5548] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 
07:12:45.017 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" iface="eth0" netns="" Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.017 [INFO][5548] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.017 [INFO][5548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.063 [INFO][5571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" HandleID="k8s-pod-network.5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.065 [INFO][5571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.065 [INFO][5571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.077 [WARNING][5571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" HandleID="k8s-pod-network.5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.077 [INFO][5571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" HandleID="k8s-pod-network.5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.079 [INFO][5571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:45.085378 containerd[1466]: 2025-08-13 07:12:45.083 [INFO][5548] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:45.088001 containerd[1466]: time="2025-08-13T07:12:45.085445963Z" level=info msg="TearDown network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\" successfully" Aug 13 07:12:45.088001 containerd[1466]: time="2025-08-13T07:12:45.085478981Z" level=info msg="StopPodSandbox for \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\" returns successfully" Aug 13 07:12:45.088001 containerd[1466]: time="2025-08-13T07:12:45.086120973Z" level=info msg="RemovePodSandbox for \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\"" Aug 13 07:12:45.088001 containerd[1466]: time="2025-08-13T07:12:45.086161368Z" level=info msg="Forcibly stopping sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\"" Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.161 [WARNING][5586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0", GenerateName:"calico-apiserver-7549bfdd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"119aced9-7cb0-4d80-8364-7168248c339c", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7549bfdd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"887a58d50236b88113c760ae26972081914e6267d08d90ef6fe16d8459f79048", Pod:"calico-apiserver-7549bfdd56-dfv7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d2403214e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.162 [INFO][5586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 
07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.162 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" iface="eth0" netns="" Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.162 [INFO][5586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.162 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.247 [INFO][5593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" HandleID="k8s-pod-network.5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.248 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.248 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.291 [WARNING][5593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" HandleID="k8s-pod-network.5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.291 [INFO][5593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" HandleID="k8s-pod-network.5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-calico--apiserver--7549bfdd56--dfv7p-eth0" Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.295 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:45.299385 containerd[1466]: 2025-08-13 07:12:45.297 [INFO][5586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1" Aug 13 07:12:45.299385 containerd[1466]: time="2025-08-13T07:12:45.299367629Z" level=info msg="TearDown network for sandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\" successfully" Aug 13 07:12:45.305420 containerd[1466]: time="2025-08-13T07:12:45.305352110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:12:45.305629 containerd[1466]: time="2025-08-13T07:12:45.305492864Z" level=info msg="RemovePodSandbox \"5d927eaff9e38a33ae76aa5ccd31e19a78e894b4ef4e5f3cc0e134eaa12802d1\" returns successfully" Aug 13 07:12:45.306704 containerd[1466]: time="2025-08-13T07:12:45.306669104Z" level=info msg="StopPodSandbox for \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\"" Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.418 [WARNING][5613] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b", Pod:"goldmane-768f4c5c69-5mqk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b9a974e7d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.419 [INFO][5613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.419 [INFO][5613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" iface="eth0" netns="" Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.419 [INFO][5613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.419 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.491 [INFO][5623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" HandleID="k8s-pod-network.3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.491 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.492 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.607 [WARNING][5623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" HandleID="k8s-pod-network.3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.609 [INFO][5623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" HandleID="k8s-pod-network.3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.662 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:45.668379 containerd[1466]: 2025-08-13 07:12:45.664 [INFO][5613] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:45.669676 containerd[1466]: time="2025-08-13T07:12:45.668371682Z" level=info msg="TearDown network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\" successfully" Aug 13 07:12:45.669676 containerd[1466]: time="2025-08-13T07:12:45.668409868Z" level=info msg="StopPodSandbox for \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\" returns successfully" Aug 13 07:12:45.669676 containerd[1466]: time="2025-08-13T07:12:45.669082246Z" level=info msg="RemovePodSandbox for \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\"" Aug 13 07:12:45.669676 containerd[1466]: time="2025-08-13T07:12:45.669120895Z" level=info msg="Forcibly stopping sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\"" Aug 13 07:12:45.929016 kubelet[2615]: I0813 07:12:45.928862 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 
07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.868 [WARNING][5641] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1cb2b53d-86e2-4e78-ad45-c6b1da7fe653", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"be9091dee94c5276d5b9c3ea9df3ea4263719fbf66416d777161bce8279aa71b", Pod:"goldmane-768f4c5c69-5mqk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b9a974e7d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.868 [INFO][5641] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.868 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" iface="eth0" netns="" Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.868 [INFO][5641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.868 [INFO][5641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.939 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" HandleID="k8s-pod-network.3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.940 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.940 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.962 [WARNING][5648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" HandleID="k8s-pod-network.3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.963 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" HandleID="k8s-pod-network.3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--5mqk4-eth0" Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.967 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:45.976803 containerd[1466]: 2025-08-13 07:12:45.972 [INFO][5641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3" Aug 13 07:12:45.976803 containerd[1466]: time="2025-08-13T07:12:45.976630663Z" level=info msg="TearDown network for sandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\" successfully" Aug 13 07:12:45.990894 containerd[1466]: time="2025-08-13T07:12:45.990796235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:12:45.991042 containerd[1466]: time="2025-08-13T07:12:45.990936591Z" level=info msg="RemovePodSandbox \"3fcc744c2dfd1b14a48eec2d8fbdbeb77fd47935c074aac6df0608fa546bfdb3\" returns successfully" Aug 13 07:12:45.991623 systemd[1]: Started sshd@9-10.128.0.53:22-139.178.68.195:40102.service - OpenSSH per-connection server daemon (139.178.68.195:40102). 
Aug 13 07:12:45.995933 containerd[1466]: time="2025-08-13T07:12:45.995891460Z" level=info msg="StopPodSandbox for \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\"" Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.091 [WARNING][5664] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68f698fa-855d-4911-b071-8b5d910e948d", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf", Pod:"csi-node-driver-phmj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali35e20f4bfcc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.092 [INFO][5664] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.092 [INFO][5664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" iface="eth0" netns="" Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.092 [INFO][5664] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.092 [INFO][5664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.154 [INFO][5673] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" HandleID="k8s-pod-network.3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.155 [INFO][5673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.155 [INFO][5673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.167 [WARNING][5673] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" HandleID="k8s-pod-network.3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.167 [INFO][5673] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" HandleID="k8s-pod-network.3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.170 [INFO][5673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:46.175084 containerd[1466]: 2025-08-13 07:12:46.172 [INFO][5664] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:46.175897 containerd[1466]: time="2025-08-13T07:12:46.175149986Z" level=info msg="TearDown network for sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\" successfully" Aug 13 07:12:46.175897 containerd[1466]: time="2025-08-13T07:12:46.175223900Z" level=info msg="StopPodSandbox for \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\" returns successfully" Aug 13 07:12:46.176006 containerd[1466]: time="2025-08-13T07:12:46.175892042Z" level=info msg="RemovePodSandbox for \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\"" Aug 13 07:12:46.176006 containerd[1466]: time="2025-08-13T07:12:46.175928809Z" level=info msg="Forcibly stopping sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\"" Aug 13 07:12:46.343211 sshd[5655]: Accepted publickey for core from 139.178.68.195 port 40102 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 
13 07:12:46.344593 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:46.363191 systemd-logind[1439]: New session 10 of user core. Aug 13 07:12:46.370445 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.263 [WARNING][5688] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68f698fa-855d-4911-b071-8b5d910e948d", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-e4a44e679a5617e9dd19.c.flatcar-212911.internal", ContainerID:"7384bffceba0887b75bb792b6718a9fc4c45e385d5c1ce2a1a13c83f55df62bf", Pod:"csi-node-driver-phmj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali35e20f4bfcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.265 [INFO][5688] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.265 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" iface="eth0" netns="" Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.265 [INFO][5688] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.265 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.349 [INFO][5695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" HandleID="k8s-pod-network.3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.349 [INFO][5695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.349 [INFO][5695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.379 [WARNING][5695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" HandleID="k8s-pod-network.3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.379 [INFO][5695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" HandleID="k8s-pod-network.3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Workload="ci--4081--3--5--e4a44e679a5617e9dd19.c.flatcar--212911.internal-k8s-csi--node--driver--phmj8-eth0" Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.383 [INFO][5695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:12:46.395204 containerd[1466]: 2025-08-13 07:12:46.389 [INFO][5688] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726" Aug 13 07:12:46.395204 containerd[1466]: time="2025-08-13T07:12:46.395029860Z" level=info msg="TearDown network for sandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\" successfully" Aug 13 07:12:46.407223 containerd[1466]: time="2025-08-13T07:12:46.405394190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:12:46.407223 containerd[1466]: time="2025-08-13T07:12:46.405509852Z" level=info msg="RemovePodSandbox \"3e3d4ad8e8a1084f8a746214c8e3c8975b27e5d628e039741b03f28b53c39726\" returns successfully" Aug 13 07:12:46.732954 sshd[5655]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:46.741052 systemd[1]: sshd@9-10.128.0.53:22-139.178.68.195:40102.service: Deactivated successfully. 
Aug 13 07:12:46.749283 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:12:46.753106 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:12:46.756233 systemd-logind[1439]: Removed session 10. Aug 13 07:12:51.324027 kubelet[2615]: I0813 07:12:51.323974 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:12:51.795812 systemd[1]: Started sshd@10-10.128.0.53:22-139.178.68.195:40980.service - OpenSSH per-connection server daemon (139.178.68.195:40980). Aug 13 07:12:52.112100 sshd[5722]: Accepted publickey for core from 139.178.68.195 port 40980 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:12:52.113066 sshd[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:52.126630 systemd-logind[1439]: New session 11 of user core. Aug 13 07:12:52.131449 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:12:52.489755 sshd[5722]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:52.497956 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:12:52.500726 systemd[1]: sshd@10-10.128.0.53:22-139.178.68.195:40980.service: Deactivated successfully. Aug 13 07:12:52.506903 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:12:52.509395 systemd-logind[1439]: Removed session 11. Aug 13 07:12:57.553343 systemd[1]: Started sshd@11-10.128.0.53:22-139.178.68.195:40992.service - OpenSSH per-connection server daemon (139.178.68.195:40992). Aug 13 07:12:57.861386 sshd[5740]: Accepted publickey for core from 139.178.68.195 port 40992 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:12:57.864081 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:57.874067 systemd-logind[1439]: New session 12 of user core. Aug 13 07:12:57.879708 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 13 07:12:58.217511 sshd[5740]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:58.226819 systemd[1]: sshd@11-10.128.0.53:22-139.178.68.195:40992.service: Deactivated successfully. Aug 13 07:12:58.232748 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:12:58.234459 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:12:58.237787 systemd-logind[1439]: Removed session 12. Aug 13 07:12:58.286669 systemd[1]: Started sshd@12-10.128.0.53:22-139.178.68.195:41008.service - OpenSSH per-connection server daemon (139.178.68.195:41008). Aug 13 07:12:58.607226 sshd[5754]: Accepted publickey for core from 139.178.68.195 port 41008 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:12:58.608592 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:58.617446 systemd-logind[1439]: New session 13 of user core. Aug 13 07:12:58.624432 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:12:59.084025 sshd[5754]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:59.091403 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:12:59.092823 systemd[1]: sshd@12-10.128.0.53:22-139.178.68.195:41008.service: Deactivated successfully. Aug 13 07:12:59.100798 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:12:59.109295 systemd-logind[1439]: Removed session 13. Aug 13 07:12:59.153644 systemd[1]: Started sshd@13-10.128.0.53:22-139.178.68.195:41010.service - OpenSSH per-connection server daemon (139.178.68.195:41010). Aug 13 07:12:59.468222 sshd[5765]: Accepted publickey for core from 139.178.68.195 port 41010 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:12:59.471919 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:12:59.480605 systemd-logind[1439]: New session 14 of user core. 
Aug 13 07:12:59.488850 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:12:59.817921 sshd[5765]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:59.828973 systemd[1]: sshd@13-10.128.0.53:22-139.178.68.195:41010.service: Deactivated successfully. Aug 13 07:12:59.830262 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:12:59.836226 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:12:59.838454 systemd-logind[1439]: Removed session 14. Aug 13 07:13:04.050251 systemd[1]: run-containerd-runc-k8s.io-dc94d374990ff3163120ab09cca4a26823b9b1bc4c0567d206b4bf1fa3c4219b-runc.z4Whxm.mount: Deactivated successfully. Aug 13 07:13:04.880099 systemd[1]: Started sshd@14-10.128.0.53:22-139.178.68.195:56612.service - OpenSSH per-connection server daemon (139.178.68.195:56612). Aug 13 07:13:05.200295 sshd[5845]: Accepted publickey for core from 139.178.68.195 port 56612 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:05.202474 sshd[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:05.215284 systemd-logind[1439]: New session 15 of user core. Aug 13 07:13:05.217434 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:13:05.544488 sshd[5845]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:05.552038 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:13:05.556069 systemd[1]: sshd@14-10.128.0.53:22-139.178.68.195:56612.service: Deactivated successfully. Aug 13 07:13:05.561996 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:13:05.564454 systemd-logind[1439]: Removed session 15. Aug 13 07:13:08.048880 systemd[1]: run-containerd-runc-k8s.io-856ab5704ccdccac94af8bb735e8f042940de31d2d4b5577d1943e18f03b5bea-runc.hProbw.mount: Deactivated successfully. 
Aug 13 07:13:10.603320 systemd[1]: Started sshd@15-10.128.0.53:22-139.178.68.195:45786.service - OpenSSH per-connection server daemon (139.178.68.195:45786). Aug 13 07:13:10.900926 sshd[5884]: Accepted publickey for core from 139.178.68.195 port 45786 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:10.902791 sshd[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:10.910744 systemd-logind[1439]: New session 16 of user core. Aug 13 07:13:10.918441 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:13:11.290785 sshd[5884]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:11.298672 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:13:11.299862 systemd[1]: sshd@15-10.128.0.53:22-139.178.68.195:45786.service: Deactivated successfully. Aug 13 07:13:11.307085 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:13:11.311415 systemd-logind[1439]: Removed session 16. Aug 13 07:13:16.350863 systemd[1]: Started sshd@16-10.128.0.53:22-139.178.68.195:45800.service - OpenSSH per-connection server daemon (139.178.68.195:45800). Aug 13 07:13:16.667391 sshd[5921]: Accepted publickey for core from 139.178.68.195 port 45800 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:16.668815 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:16.682257 systemd-logind[1439]: New session 17 of user core. Aug 13 07:13:16.687799 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:13:17.004560 sshd[5921]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:17.012463 systemd[1]: sshd@16-10.128.0.53:22-139.178.68.195:45800.service: Deactivated successfully. Aug 13 07:13:17.016780 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:13:17.021775 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit. 
Aug 13 07:13:17.024157 systemd-logind[1439]: Removed session 17. Aug 13 07:13:22.063310 systemd[1]: Started sshd@17-10.128.0.53:22-139.178.68.195:58062.service - OpenSSH per-connection server daemon (139.178.68.195:58062). Aug 13 07:13:22.378292 sshd[5936]: Accepted publickey for core from 139.178.68.195 port 58062 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:22.380987 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:22.392228 systemd-logind[1439]: New session 18 of user core. Aug 13 07:13:22.399543 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:13:22.722047 sshd[5936]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:22.728034 systemd[1]: sshd@17-10.128.0.53:22-139.178.68.195:58062.service: Deactivated successfully. Aug 13 07:13:22.731711 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:13:22.732986 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:13:22.734940 systemd-logind[1439]: Removed session 18. Aug 13 07:13:22.783337 systemd[1]: Started sshd@18-10.128.0.53:22-139.178.68.195:58072.service - OpenSSH per-connection server daemon (139.178.68.195:58072). Aug 13 07:13:23.092147 sshd[5949]: Accepted publickey for core from 139.178.68.195 port 58072 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:23.093808 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:23.103651 systemd-logind[1439]: New session 19 of user core. Aug 13 07:13:23.112498 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:13:23.507501 sshd[5949]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:23.515615 systemd[1]: sshd@18-10.128.0.53:22-139.178.68.195:58072.service: Deactivated successfully. Aug 13 07:13:23.520050 systemd[1]: session-19.scope: Deactivated successfully. 
Aug 13 07:13:23.524875 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:13:23.527894 systemd-logind[1439]: Removed session 19. Aug 13 07:13:23.568333 systemd[1]: Started sshd@19-10.128.0.53:22-139.178.68.195:58080.service - OpenSSH per-connection server daemon (139.178.68.195:58080). Aug 13 07:13:23.881312 sshd[5960]: Accepted publickey for core from 139.178.68.195 port 58080 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:23.885448 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:23.894979 systemd-logind[1439]: New session 20 of user core. Aug 13 07:13:23.902443 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:13:25.025353 sshd[5960]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:25.035419 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:13:25.037158 systemd[1]: sshd@19-10.128.0.53:22-139.178.68.195:58080.service: Deactivated successfully. Aug 13 07:13:25.044061 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:13:25.050266 systemd-logind[1439]: Removed session 20. Aug 13 07:13:25.090385 systemd[1]: Started sshd@20-10.128.0.53:22-139.178.68.195:58084.service - OpenSSH per-connection server daemon (139.178.68.195:58084). Aug 13 07:13:25.406317 sshd[5978]: Accepted publickey for core from 139.178.68.195 port 58084 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:25.412267 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:25.424526 systemd-logind[1439]: New session 21 of user core. Aug 13 07:13:25.434510 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:13:26.055626 sshd[5978]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:26.063289 systemd[1]: sshd@20-10.128.0.53:22-139.178.68.195:58084.service: Deactivated successfully. 
Aug 13 07:13:26.067933 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:13:26.070506 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:13:26.073387 systemd-logind[1439]: Removed session 21. Aug 13 07:13:26.113727 systemd[1]: Started sshd@21-10.128.0.53:22-139.178.68.195:58096.service - OpenSSH per-connection server daemon (139.178.68.195:58096). Aug 13 07:13:26.420644 sshd[5989]: Accepted publickey for core from 139.178.68.195 port 58096 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:26.424333 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:26.439336 systemd-logind[1439]: New session 22 of user core. Aug 13 07:13:26.446449 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:13:26.757119 sshd[5989]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:26.765482 systemd[1]: sshd@21-10.128.0.53:22-139.178.68.195:58096.service: Deactivated successfully. Aug 13 07:13:26.771089 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:13:26.774528 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:13:26.779578 systemd-logind[1439]: Removed session 22. Aug 13 07:13:31.816639 systemd[1]: Started sshd@22-10.128.0.53:22-139.178.68.195:55344.service - OpenSSH per-connection server daemon (139.178.68.195:55344). Aug 13 07:13:32.122985 sshd[6003]: Accepted publickey for core from 139.178.68.195 port 55344 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:32.125980 sshd[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:32.135141 systemd-logind[1439]: New session 23 of user core. Aug 13 07:13:32.141410 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 13 07:13:32.473586 sshd[6003]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:32.483605 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:13:32.484646 systemd[1]: sshd@22-10.128.0.53:22-139.178.68.195:55344.service: Deactivated successfully. Aug 13 07:13:32.490220 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:13:32.495471 systemd-logind[1439]: Removed session 23. Aug 13 07:13:37.535332 systemd[1]: Started sshd@23-10.128.0.53:22-139.178.68.195:55350.service - OpenSSH per-connection server daemon (139.178.68.195:55350). Aug 13 07:13:37.842657 sshd[6059]: Accepted publickey for core from 139.178.68.195 port 55350 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:37.843747 sshd[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:37.857263 systemd-logind[1439]: New session 24 of user core. Aug 13 07:13:37.864440 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:13:38.200501 sshd[6059]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:38.210656 systemd[1]: sshd@23-10.128.0.53:22-139.178.68.195:55350.service: Deactivated successfully. Aug 13 07:13:38.211282 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:13:38.216153 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:13:38.217962 systemd-logind[1439]: Removed session 24. Aug 13 07:13:43.260617 systemd[1]: Started sshd@24-10.128.0.53:22-139.178.68.195:48672.service - OpenSSH per-connection server daemon (139.178.68.195:48672). Aug 13 07:13:43.601365 sshd[6074]: Accepted publickey for core from 139.178.68.195 port 48672 ssh2: RSA SHA256:IOAzRhpk7klwxeHltvhiKPPLBfjdcadVmqfhkAQU/hs Aug 13 07:13:43.602514 sshd[6074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:43.613101 systemd-logind[1439]: New session 25 of user core. 
Aug 13 07:13:43.619456 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 07:13:43.913550 sshd[6074]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:43.921509 systemd[1]: sshd@24-10.128.0.53:22-139.178.68.195:48672.service: Deactivated successfully. Aug 13 07:13:43.926994 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 07:13:43.933050 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:13:43.935231 systemd-logind[1439]: Removed session 25.