Jul 6 23:54:58.237086 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:54:58.237145 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:58.237155 kernel: BIOS-provided physical RAM map:
Jul 6 23:54:58.237178 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:54:58.237193 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 6 23:54:58.237200 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul 6 23:54:58.237208 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jul 6 23:54:58.237217 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jul 6 23:54:58.237224 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 6 23:54:58.237231 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 6 23:54:58.237240 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 6 23:54:58.237247 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 6 23:54:58.237253 kernel: printk: bootconsole [earlyser0] enabled
Jul 6 23:54:58.237259 kernel: NX (Execute Disable) protection: active
Jul 6 23:54:58.237269 kernel: APIC: Static calls initialized
Jul 6 23:54:58.237280 kernel: efi: EFI v2.7 by Microsoft
Jul 6 23:54:58.237287 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98
Jul 6 23:54:58.237294 kernel: SMBIOS 3.1.0 present.
Jul 6 23:54:58.237301 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul 6 23:54:58.237308 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 6 23:54:58.237316 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul 6 23:54:58.237326 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Jul 6 23:54:58.237333 kernel: Hyper-V: Nested features: 0x1e0101
Jul 6 23:54:58.237339 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 6 23:54:58.237352 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 6 23:54:58.237359 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 6 23:54:58.237367 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 6 23:54:58.237374 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul 6 23:54:58.237382 kernel: tsc: Detected 2593.907 MHz processor
Jul 6 23:54:58.237391 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:54:58.237400 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:54:58.237407 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul 6 23:54:58.237417 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 6 23:54:58.237427 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:54:58.237435 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul 6 23:54:58.237442 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul 6 23:54:58.237449 kernel: Using GB pages for direct mapping
Jul 6 23:54:58.237456 kernel: Secure boot disabled
Jul 6 23:54:58.237466 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:54:58.237474 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 6 23:54:58.237489 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:58.237502 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:58.237511 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 6 23:54:58.237519 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 6 23:54:58.237530 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:58.237538 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:58.237545 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:58.237558 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:58.237566 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:58.237576 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:58.237584 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:58.237594 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 6 23:54:58.237602 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul 6 23:54:58.237610 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 6 23:54:58.237630 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 6 23:54:58.237643 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 6 23:54:58.237651 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 6 23:54:58.237662 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul 6 23:54:58.237670 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul 6 23:54:58.237680 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 6 23:54:58.237977 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul 6 23:54:58.238278 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:54:58.238478 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:54:58.240686 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 6 23:54:58.240719 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul 6 23:54:58.240732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul 6 23:54:58.240745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 6 23:54:58.240757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 6 23:54:58.240768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 6 23:54:58.240779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 6 23:54:58.240792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 6 23:54:58.240805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 6 23:54:58.240818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 6 23:54:58.240838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 6 23:54:58.240852 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 6 23:54:58.240866 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul 6 23:54:58.240880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul 6 23:54:58.240893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul 6 23:54:58.240907 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul 6 23:54:58.240922 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul 6 23:54:58.240936 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul 6 23:54:58.240951 kernel: Zone ranges:
Jul 6 23:54:58.240969 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:54:58.240983 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 6 23:54:58.240996 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 6 23:54:58.241009 kernel: Movable zone start for each node
Jul 6 23:54:58.241024 kernel: Early memory node ranges
Jul 6 23:54:58.241037 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 6 23:54:58.241052 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul 6 23:54:58.241066 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 6 23:54:58.241080 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 6 23:54:58.241097 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 6 23:54:58.241111 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:54:58.241125 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 6 23:54:58.241138 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul 6 23:54:58.241153 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 6 23:54:58.241167 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul 6 23:54:58.241212 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:54:58.241227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:54:58.241241 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:54:58.241260 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 6 23:54:58.241273 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:54:58.241288 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 6 23:54:58.241302 kernel: Booting paravirtualized kernel on Hyper-V
Jul 6 23:54:58.241316 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:54:58.241330 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:54:58.241344 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:54:58.241358 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:54:58.241371 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:54:58.241388 kernel: Hyper-V: PV spinlocks enabled
Jul 6 23:54:58.241403 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:54:58.241419 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:58.241434 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:54:58.241448 kernel: random: crng init done
Jul 6 23:54:58.241461 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 6 23:54:58.241474 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:54:58.241489 kernel: Fallback order for Node 0: 0
Jul 6 23:54:58.241507 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jul 6 23:54:58.241532 kernel: Policy zone: Normal
Jul 6 23:54:58.241549 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:54:58.241563 kernel: software IO TLB: area num 2.
Jul 6 23:54:58.241578 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 310128K reserved, 0K cma-reserved)
Jul 6 23:54:58.241593 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:54:58.241608 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:54:58.241634 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:54:58.241646 kernel: Dynamic Preempt: voluntary
Jul 6 23:54:58.241659 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:54:58.241673 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:54:58.241690 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:54:58.241733 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:54:58.241750 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:54:58.241764 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:54:58.241777 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:54:58.241795 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:54:58.241809 kernel: Using NULL legacy PIC
Jul 6 23:54:58.241822 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 6 23:54:58.241836 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:54:58.241850 kernel: Console: colour dummy device 80x25
Jul 6 23:54:58.241863 kernel: printk: console [tty1] enabled
Jul 6 23:54:58.241876 kernel: printk: console [ttyS0] enabled
Jul 6 23:54:58.241890 kernel: printk: bootconsole [earlyser0] disabled
Jul 6 23:54:58.241903 kernel: ACPI: Core revision 20230628
Jul 6 23:54:58.241916 kernel: Failed to register legacy timer interrupt
Jul 6 23:54:58.241933 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:54:58.241946 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 6 23:54:58.241963 kernel: Hyper-V: Using IPI hypercalls
Jul 6 23:54:58.241978 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 6 23:54:58.241994 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 6 23:54:58.242010 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 6 23:54:58.242027 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 6 23:54:58.242042 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 6 23:54:58.242058 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 6 23:54:58.242078 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jul 6 23:54:58.242094 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 6 23:54:58.242109 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 6 23:54:58.242125 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:54:58.242141 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:54:58.242157 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:54:58.242172 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 6 23:54:58.242188 kernel: RETBleed: Vulnerable
Jul 6 23:54:58.242236 kernel: Speculative Store Bypass: Vulnerable
Jul 6 23:54:58.242261 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:54:58.242277 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:54:58.242292 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:54:58.242307 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:54:58.242321 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:54:58.242335 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:54:58.242349 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 6 23:54:58.242362 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 6 23:54:58.242376 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 6 23:54:58.242390 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:54:58.242403 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 6 23:54:58.242421 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 6 23:54:58.242434 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 6 23:54:58.242448 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul 6 23:54:58.242461 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:54:58.242474 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:54:58.242487 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:54:58.242500 kernel: landlock: Up and running.
Jul 6 23:54:58.242513 kernel: SELinux: Initializing.
Jul 6 23:54:58.242526 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:54:58.242539 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:54:58.242554 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 6 23:54:58.242568 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:58.242587 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:58.242601 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:58.244658 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 6 23:54:58.244679 kernel: signal: max sigframe size: 3632
Jul 6 23:54:58.263006 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:54:58.263257 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:54:58.263444 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:54:58.263608 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:54:58.264008 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:54:58.264026 kernel: .... node #0, CPUs: #1
Jul 6 23:54:58.264036 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul 6 23:54:58.264046 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 6 23:54:58.264054 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:54:58.264062 kernel: smpboot: Max logical packages: 1
Jul 6 23:54:58.264071 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jul 6 23:54:58.264079 kernel: devtmpfs: initialized
Jul 6 23:54:58.264087 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:54:58.264099 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 6 23:54:58.264107 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:54:58.264115 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:54:58.264123 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:54:58.264132 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:54:58.264140 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:54:58.264148 kernel: audit: type=2000 audit(1751846096.030:1): state=initialized audit_enabled=0 res=1
Jul 6 23:54:58.264156 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:54:58.264164 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:54:58.264175 kernel: cpuidle: using governor menu
Jul 6 23:54:58.264183 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:54:58.264191 kernel: dca service started, version 1.12.1
Jul 6 23:54:58.264199 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jul 6 23:54:58.264207 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:54:58.264216 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:54:58.264224 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:54:58.264232 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:54:58.264240 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:54:58.264250 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:54:58.264259 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:54:58.264267 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:54:58.264275 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:54:58.264283 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:54:58.264291 kernel: ACPI: Interpreter enabled
Jul 6 23:54:58.264299 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:54:58.264307 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:54:58.264315 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:54:58.264325 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 6 23:54:58.264334 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 6 23:54:58.264342 kernel: iommu: Default domain type: Translated
Jul 6 23:54:58.264350 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:54:58.264358 kernel: efivars: Registered efivars operations
Jul 6 23:54:58.264366 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:54:58.264374 kernel: PCI: System does not support PCI
Jul 6 23:54:58.264382 kernel: vgaarb: loaded
Jul 6 23:54:58.264390 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 6 23:54:58.264400 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:54:58.264408 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:54:58.264416 kernel: pnp: PnP ACPI init
Jul 6 23:54:58.264424 kernel: pnp: PnP ACPI: found 3 devices
Jul 6 23:54:58.264433 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:54:58.264441 kernel: NET: Registered PF_INET protocol family
Jul 6 23:54:58.264449 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:54:58.264457 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 6 23:54:58.264465 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:54:58.264476 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:54:58.264484 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 6 23:54:58.264492 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 6 23:54:58.264500 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:54:58.264508 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:54:58.264516 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:54:58.264524 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:54:58.264531 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:54:58.264540 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 6 23:54:58.264550 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB)
Jul 6 23:54:58.264558 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:54:58.264566 kernel: Initialise system trusted keyrings
Jul 6 23:54:58.264574 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 6 23:54:58.264582 kernel: Key type asymmetric registered
Jul 6 23:54:58.264590 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:54:58.264598 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:54:58.264606 kernel: io scheduler mq-deadline registered
Jul 6 23:54:58.264623 kernel: io scheduler kyber registered
Jul 6 23:54:58.264634 kernel: io scheduler bfq registered
Jul 6 23:54:58.264642 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:54:58.264650 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:54:58.264658 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:54:58.264666 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 6 23:54:58.264674 kernel: i8042: PNP: No PS/2 controller found.
Jul 6 23:54:58.264855 kernel: rtc_cmos 00:02: registered as rtc0
Jul 6 23:54:58.264942 kernel: rtc_cmos 00:02: setting system clock to 2025-07-06T23:54:57 UTC (1751846097)
Jul 6 23:54:58.265022 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 6 23:54:58.265033 kernel: intel_pstate: CPU model not supported
Jul 6 23:54:58.265042 kernel: efifb: probing for efifb
Jul 6 23:54:58.265051 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 6 23:54:58.265059 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 6 23:54:58.265067 kernel: efifb: scrolling: redraw
Jul 6 23:54:58.265075 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:54:58.265083 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:54:58.265094 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:54:58.265103 kernel: pstore: Using crash dump compression: deflate
Jul 6 23:54:58.265111 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 6 23:54:58.265119 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:54:58.265127 kernel: Segment Routing with IPv6
Jul 6 23:54:58.265134 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:54:58.265143 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:54:58.265151 kernel: Key type dns_resolver registered
Jul 6 23:54:58.265158 kernel: IPI shorthand broadcast: enabled
Jul 6 23:54:58.265167 kernel: sched_clock: Marking stable (1004002600, 54684000)->(1336223500, -277536900)
Jul 6 23:54:58.265177 kernel: registered taskstats version 1
Jul 6 23:54:58.265185 kernel: Loading compiled-in X.509 certificates
Jul 6 23:54:58.265193 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:54:58.265201 kernel: Key type .fscrypt registered
Jul 6 23:54:58.265209 kernel: Key type fscrypt-provisioning registered
Jul 6 23:54:58.265217 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:54:58.265225 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:54:58.265233 kernel: ima: No architecture policies found
Jul 6 23:54:58.265244 kernel: clk: Disabling unused clocks
Jul 6 23:54:58.265252 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:54:58.265260 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:54:58.265268 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:54:58.265276 kernel: Run /init as init process
Jul 6 23:54:58.265285 kernel: with arguments:
Jul 6 23:54:58.265292 kernel: /init
Jul 6 23:54:58.265300 kernel: with environment:
Jul 6 23:54:58.265308 kernel: HOME=/
Jul 6 23:54:58.265316 kernel: TERM=linux
Jul 6 23:54:58.265326 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:54:58.265337 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:54:58.265348 systemd[1]: Detected virtualization microsoft.
Jul 6 23:54:58.265356 systemd[1]: Detected architecture x86-64.
Jul 6 23:54:58.265364 systemd[1]: Running in initrd.
Jul 6 23:54:58.265373 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:54:58.265381 systemd[1]: Hostname set to .
Jul 6 23:54:58.265392 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:54:58.265401 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:54:58.265409 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:54:58.265418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:54:58.265427 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:54:58.265436 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:54:58.265445 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:54:58.265453 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:54:58.265466 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:54:58.265475 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:54:58.265483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:54:58.265492 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:54:58.265500 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:54:58.265509 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:54:58.265518 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:54:58.265529 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:54:58.265537 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:54:58.265546 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:54:58.265554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:54:58.265563 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:54:58.265571 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:54:58.265580 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:54:58.265589 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:54:58.265599 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:54:58.265608 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:54:58.265627 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:54:58.265636 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:54:58.265644 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:54:58.265653 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:54:58.265661 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:54:58.265692 systemd-journald[176]: Collecting audit messages is disabled.
Jul 6 23:54:58.265716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:58.265724 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:54:58.265733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:54:58.265742 systemd-journald[176]: Journal started
Jul 6 23:54:58.265764 systemd-journald[176]: Runtime Journal (/run/log/journal/38a3c5f3fcc7435b9e55252e0596bf2e) is 8.0M, max 158.8M, 150.8M free.
Jul 6 23:54:58.273779 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:54:58.276765 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:54:58.280923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:58.288218 systemd-modules-load[177]: Inserted module 'overlay'
Jul 6 23:54:58.298817 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:58.320857 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:54:58.339271 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:54:58.342240 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:54:58.346982 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:54:58.349474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:54:58.364254 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:54:58.368443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:54:58.376153 kernel: Bridge firewalling registered
Jul 6 23:54:58.376248 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jul 6 23:54:58.376435 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:58.381760 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:54:58.388140 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:54:58.400813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:54:58.414552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:54:58.420792 dracut-cmdline[208]: dracut-dracut-053
Jul 6 23:54:58.423895 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:58.435962 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:54:58.491366 systemd-resolved[222]: Positive Trust Anchors:
Jul 6 23:54:58.494152 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:54:58.494209 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:54:58.517608 systemd-resolved[222]: Defaulting to hostname 'linux'.
Jul 6 23:54:58.521294 kernel: SCSI subsystem initialized
Jul 6 23:54:58.518964 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:54:58.526553 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:54:58.535633 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:54:58.547640 kernel: iscsi: registered transport (tcp)
Jul 6 23:54:58.585669 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:54:58.585741 kernel: QLogic iSCSI HBA Driver
Jul 6 23:54:58.623199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:54:58.629803 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:54:58.657310 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:54:58.657420 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:54:58.660555 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:54:58.701646 kernel: raid6: avx512x4 gen() 18043 MB/s
Jul 6 23:54:58.720630 kernel: raid6: avx512x2 gen() 17932 MB/s
Jul 6 23:54:58.739624 kernel: raid6: avx512x1 gen() 18007 MB/s
Jul 6 23:54:58.759638 kernel: raid6: avx2x4 gen() 17964 MB/s
Jul 6 23:54:58.778649 kernel: raid6: avx2x2 gen() 17660 MB/s
Jul 6 23:54:58.798794 kernel: raid6: avx2x1 gen() 13665 MB/s
Jul 6 23:54:58.798868 kernel: raid6: using algorithm avx512x4 gen() 18043 MB/s
Jul 6 23:54:58.826103 kernel: raid6: .... xor() 5425 MB/s, rmw enabled
Jul 6 23:54:58.826174 kernel: raid6: using avx512x2 recovery algorithm
Jul 6 23:54:58.851643 kernel: xor: automatically using best checksumming function avx
Jul 6 23:54:58.997643 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:54:59.007784 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:54:59.020889 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:54:59.033892 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jul 6 23:54:59.038530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:54:59.060853 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:54:59.074122 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Jul 6 23:54:59.102594 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:54:59.113791 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:54:59.158243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:54:59.175657 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:54:59.217062 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:54:59.225062 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:54:59.229978 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:54:59.233294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:54:59.239161 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:54:59.250779 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:54:59.258017 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:54:59.258061 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:54:59.266637 kernel: hv_vmbus: Vmbus version:5.2
Jul 6 23:54:59.292637 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 6 23:54:59.303884 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:54:59.322780 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 6 23:54:59.322811 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 6 23:54:59.322831 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 6 23:54:59.341669 kernel: PTP clock support registered
Jul 6 23:54:59.352200 kernel: hv_vmbus: registering driver hv_netvsc
Jul 6 23:54:59.352253 kernel: hv_utils: Registering HyperV Utility Driver
Jul 6 23:54:59.353638 kernel: hv_vmbus: registering driver hv_utils
Jul 6 23:54:59.357789 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:54:59.366355 kernel: hv_utils: Heartbeat IC version 3.0
Jul 6 23:54:59.366383 kernel: hv_utils: Shutdown IC version 3.2
Jul 6 23:54:59.366401 kernel: hv_utils: TimeSync IC version 4.0
Jul 6 23:54:59.358039 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:59.512540 systemd-resolved[222]: Clock change detected. Flushing caches.
Jul 6 23:54:59.515868 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:59.532742 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:54:59.518468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:54:59.518674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:59.521423 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:59.544806 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:59.556484 kernel: hv_vmbus: registering driver hid_hyperv
Jul 6 23:54:59.565648 kernel: hv_vmbus: registering driver hv_storvsc
Jul 6 23:54:59.565712 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 6 23:54:59.565737 kernel: scsi host0: storvsc_host_t
Jul 6 23:54:59.571247 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 6 23:54:59.571446 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 6 23:54:59.574956 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:54:59.577068 kernel: scsi host1: storvsc_host_t
Jul 6 23:54:59.575074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:59.583511 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 6 23:54:59.596980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:59.613346 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 6 23:54:59.613739 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:54:59.618485 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 6 23:54:59.623927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:59.637615 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 6 23:54:59.637881 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 6 23:54:59.641539 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 6 23:54:59.644649 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 6 23:54:59.644852 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 6 23:54:59.645846 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:59.656072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:54:59.656105 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 6 23:54:59.679232 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:59.731276 kernel: hv_netvsc 7ced8d2c-eaf4-7ced-8d2c-eaf47ced8d2c eth0: VF slot 1 added
Jul 6 23:54:59.740488 kernel: hv_vmbus: registering driver hv_pci
Jul 6 23:54:59.749248 kernel: hv_pci ba63851c-f7db-47d9-ba80-959063118e13: PCI VMBus probing: Using version 0x10004
Jul 6 23:54:59.749491 kernel: hv_pci ba63851c-f7db-47d9-ba80-959063118e13: PCI host bridge to bus f7db:00
Jul 6 23:54:59.753665 kernel: pci_bus f7db:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jul 6 23:54:59.756647 kernel: pci_bus f7db:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 6 23:54:59.764720 kernel: pci f7db:00:02.0: [15b3:1016] type 00 class 0x020000
Jul 6 23:54:59.769523 kernel: pci f7db:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 6 23:54:59.775712 kernel: pci f7db:00:02.0: enabling Extended Tags
Jul 6 23:54:59.795499 kernel: pci f7db:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f7db:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jul 6 23:54:59.802519 kernel: pci_bus f7db:00: busn_res: [bus 00-ff] end is updated to 00
Jul 6 23:54:59.802809 kernel: pci f7db:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 6 23:54:59.997410 kernel: mlx5_core f7db:00:02.0: enabling device (0000 -> 0002)
Jul 6 23:54:59.997675 kernel: mlx5_core f7db:00:02.0: firmware version: 14.30.5000
Jul 6 23:55:00.103626 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 6 23:55:00.188496 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (461)
Jul 6 23:55:00.206962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:55:00.218162 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (446)
Jul 6 23:55:00.235735 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 6 23:55:00.239353 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 6 23:55:00.264425 kernel: hv_netvsc 7ced8d2c-eaf4-7ced-8d2c-eaf47ced8d2c eth0: VF registering: eth1
Jul 6 23:55:00.264733 kernel: mlx5_core f7db:00:02.0 eth1: joined to eth0
Jul 6 23:55:00.262279 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:55:00.278746 kernel: mlx5_core f7db:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 6 23:55:00.298489 kernel: mlx5_core f7db:00:02.0 enP63451s1: renamed from eth1
Jul 6 23:55:00.492306 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 6 23:55:01.295335 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:55:01.295406 disk-uuid[601]: The operation has completed successfully.
Jul 6 23:55:01.387397 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:55:01.387527 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:55:01.401677 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:55:01.425732 sh[718]: Success
Jul 6 23:55:01.464586 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:55:01.658494 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:55:01.672238 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:55:01.687721 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:55:01.721449 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:55:01.721570 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:01.725798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:55:01.729565 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:55:01.733033 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:55:02.006504 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:55:02.010343 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:55:02.032874 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:55:02.042208 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:55:02.063414 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:02.063525 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:02.066400 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:02.118115 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:55:02.127341 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:55:02.133488 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:02.136142 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:55:02.150645 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:55:02.155009 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:55:02.171788 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:55:02.177530 systemd-networkd[900]: lo: Link UP
Jul 6 23:55:02.177533 systemd-networkd[900]: lo: Gained carrier
Jul 6 23:55:02.179753 systemd-networkd[900]: Enumeration completed
Jul 6 23:55:02.180307 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:55:02.182229 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:02.182232 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:55:02.184877 systemd[1]: Reached target network.target - Network.
Jul 6 23:55:02.269488 kernel: mlx5_core f7db:00:02.0 enP63451s1: Link up
Jul 6 23:55:02.302482 kernel: hv_netvsc 7ced8d2c-eaf4-7ced-8d2c-eaf47ced8d2c eth0: Data path switched to VF: enP63451s1
Jul 6 23:55:02.302443 systemd-networkd[900]: enP63451s1: Link UP
Jul 6 23:55:02.302798 systemd-networkd[900]: eth0: Link UP
Jul 6 23:55:02.302979 systemd-networkd[900]: eth0: Gained carrier
Jul 6 23:55:02.302990 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:02.322124 systemd-networkd[900]: enP63451s1: Gained carrier
Jul 6 23:55:02.354573 systemd-networkd[900]: eth0: DHCPv4 address 10.200.8.46/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 6 23:55:02.964005 ignition[902]: Ignition 2.19.0
Jul 6 23:55:02.964019 ignition[902]: Stage: fetch-offline
Jul 6 23:55:02.964067 ignition[902]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:02.964079 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:02.964207 ignition[902]: parsed url from cmdline: ""
Jul 6 23:55:02.964212 ignition[902]: no config URL provided
Jul 6 23:55:02.964219 ignition[902]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:55:02.964230 ignition[902]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:55:02.964239 ignition[902]: failed to fetch config: resource requires networking
Jul 6 23:55:02.986894 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:55:02.972083 ignition[902]: Ignition finished successfully
Jul 6 23:55:03.013795 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:55:03.035682 ignition[910]: Ignition 2.19.0
Jul 6 23:55:03.035693 ignition[910]: Stage: fetch
Jul 6 23:55:03.035943 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:03.035958 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:03.036922 ignition[910]: parsed url from cmdline: ""
Jul 6 23:55:03.037015 ignition[910]: no config URL provided
Jul 6 23:55:03.037084 ignition[910]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:55:03.037305 ignition[910]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:55:03.037745 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 6 23:55:03.128674 ignition[910]: GET result: OK
Jul 6 23:55:03.128821 ignition[910]: config has been read from IMDS userdata
Jul 6 23:55:03.128866 ignition[910]: parsing config with SHA512: 209716e91e32fc4960d46b4e7d9a040341a83a25377b086e960d997a0e3d45fe0e2ba22c2e4e40e679189840aa262921dfe6bb8c3b3f9c3e39bc85d6776df631
Jul 6 23:55:03.136593 unknown[910]: fetched base config from "system"
Jul 6 23:55:03.137376 ignition[910]: fetch: fetch complete
Jul 6 23:55:03.136610 unknown[910]: fetched base config from "system"
Jul 6 23:55:03.137385 ignition[910]: fetch: fetch passed
Jul 6 23:55:03.136626 unknown[910]: fetched user config from "azure"
Jul 6 23:55:03.137452 ignition[910]: Ignition finished successfully
Jul 6 23:55:03.154052 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:55:03.167774 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:55:03.186923 ignition[916]: Ignition 2.19.0
Jul 6 23:55:03.186935 ignition[916]: Stage: kargs
Jul 6 23:55:03.187175 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:03.187189 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:03.188095 ignition[916]: kargs: kargs passed
Jul 6 23:55:03.188144 ignition[916]: Ignition finished successfully
Jul 6 23:55:03.202260 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:55:03.213877 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:55:03.232113 ignition[922]: Ignition 2.19.0
Jul 6 23:55:03.232125 ignition[922]: Stage: disks
Jul 6 23:55:03.235029 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:55:03.232358 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:03.241552 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:55:03.232372 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:03.250001 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:55:03.233283 ignition[922]: disks: disks passed
Jul 6 23:55:03.256833 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:55:03.233331 ignition[922]: Ignition finished successfully
Jul 6 23:55:03.279021 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:55:03.284406 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:55:03.303794 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:55:03.366162 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 6 23:55:03.374431 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:55:03.397610 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:55:03.513579 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:55:03.514207 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:55:03.516916 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:55:03.550656 systemd-networkd[900]: eth0: Gained IPv6LL
Jul 6 23:55:03.554722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:03.561686 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:55:03.573489 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Jul 6 23:55:03.582067 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:03.582169 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:03.585007 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:03.591560 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:55:03.591346 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:55:03.593583 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:55:03.593630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:55:03.620140 systemd-networkd[900]: enP63451s1: Gained IPv6LL
Jul 6 23:55:03.624240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:03.631983 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:55:03.645761 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:55:04.202505 coreos-metadata[944]: Jul 06 23:55:04.202 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:55:04.211592 coreos-metadata[944]: Jul 06 23:55:04.211 INFO Fetch successful
Jul 6 23:55:04.215361 coreos-metadata[944]: Jul 06 23:55:04.211 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:55:04.224391 coreos-metadata[944]: Jul 06 23:55:04.222 INFO Fetch successful
Jul 6 23:55:04.237791 coreos-metadata[944]: Jul 06 23:55:04.237 INFO wrote hostname ci-4081.3.4-a-6a836f1a00 to /sysroot/etc/hostname
Jul 6 23:55:04.240568 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:55:04.271103 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:55:04.348180 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:55:04.369421 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:55:04.378918 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:55:05.204361 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:55:05.218712 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:55:05.233072 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:55:05.248326 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:55:05.261558 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:05.277909 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:55:05.299455 ignition[1062]: INFO : Ignition 2.19.0
Jul 6 23:55:05.299455 ignition[1062]: INFO : Stage: mount
Jul 6 23:55:05.304703 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:05.304703 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:05.304703 ignition[1062]: INFO : mount: mount passed
Jul 6 23:55:05.304703 ignition[1062]: INFO : Ignition finished successfully
Jul 6 23:55:05.302679 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:55:05.329692 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:55:05.339147 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:05.355490 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Jul 6 23:55:05.364064 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:05.364160 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:05.367066 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:05.372892 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:55:05.374690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:05.410551 ignition[1089]: INFO : Ignition 2.19.0
Jul 6 23:55:05.410551 ignition[1089]: INFO : Stage: files
Jul 6 23:55:05.418388 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:05.418388 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:05.418388 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:55:05.484029 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:55:05.484029 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:55:05.541756 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:55:05.549249 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:55:05.549249 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:55:05.542273 unknown[1089]: wrote ssh authorized keys file for user: core
Jul 6 23:55:05.569073 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:55:05.574272 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:55:05.645066 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:55:05.809686 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:55:05.809686 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.905039 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:55:06.709531 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:55:07.033594 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:07.033594 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:55:07.062165 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:55:07.074920 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:55:07.074920 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:55:07.074920 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:55:07.074920 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:55:07.074920 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:55:07.074920 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:55:07.074920 ignition[1089]: INFO : files: files passed
Jul 6 23:55:07.074920 ignition[1089]: INFO : Ignition finished successfully
Jul 6 23:55:07.064077 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:55:07.092750 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:55:07.100647 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:55:07.105066 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:55:07.105165 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:55:07.138696 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:07.138696 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:07.155134 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:07.140793 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:55:07.148837 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:55:07.165602 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:55:07.190891 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:55:07.191019 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:55:07.199332 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:55:07.202131 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:55:07.207148 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:55:07.214713 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:55:07.228373 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:55:07.239627 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:55:07.250211 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:55:07.250423 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:55:07.251576 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:55:07.251978 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:55:07.252114 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:55:07.253144 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:55:07.253697 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:55:07.253989 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:55:07.254432 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:55:07.255239 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:55:07.255643 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:55:07.256032 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:55:07.256540 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:55:07.256903 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:55:07.257351 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:55:07.257712 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:55:07.257858 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:55:07.258924 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:55:07.259333 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:55:07.260068 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:55:07.297441 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:55:07.315176 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:55:07.315336 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:55:07.429367 ignition[1142]: INFO : Ignition 2.19.0
Jul 6 23:55:07.429367 ignition[1142]: INFO : Stage: umount
Jul 6 23:55:07.429367 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:07.429367 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:07.429367 ignition[1142]: INFO : umount: umount passed
Jul 6 23:55:07.429367 ignition[1142]: INFO : Ignition finished successfully
Jul 6 23:55:07.326259 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:55:07.327844 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:55:07.336255 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:55:07.338603 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:55:07.343304 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 6 23:55:07.345739 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:55:07.375572 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:55:07.389805 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:55:07.390121 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:55:07.422266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:55:07.429397 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:55:07.430726 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:55:07.434868 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:55:07.435035 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:55:07.442277 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:55:07.443511 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:55:07.462668 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:55:07.463017 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:55:07.467111 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:55:07.467172 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:55:07.467397 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:55:07.467432 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:55:07.468153 systemd[1]: Stopped target network.target - Network.
Jul 6 23:55:07.471127 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:55:07.471182 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:55:07.471550 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:55:07.471906 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:55:07.498121 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:55:07.502101 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:55:07.504509 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:55:07.507307 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:55:07.507360 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:55:07.523218 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:55:07.523276 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:55:07.528855 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:55:07.528930 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:55:07.533997 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:55:07.534061 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:55:07.541315 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:55:07.546704 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:55:07.552957 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:55:07.553560 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:55:07.553657 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:55:07.561534 systemd-networkd[900]: eth0: DHCPv6 lease lost
Jul 6 23:55:07.574693 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:55:07.574814 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:55:07.593406 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:55:07.593524 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:55:07.603799 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:55:07.603868 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:55:07.623893 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:55:07.632794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:55:07.632888 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:55:07.636132 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:55:07.636197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:55:07.640900 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:55:07.640962 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:55:07.652679 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:55:07.655002 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:55:07.662305 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:55:07.687869 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:55:07.688023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:55:07.693556 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:55:07.693645 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:55:07.698610 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:55:07.698658 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:55:07.724939 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:55:07.725036 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:55:07.733930 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:55:07.734017 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:55:07.743504 kernel: hv_netvsc 7ced8d2c-eaf4-7ced-8d2c-eaf47ced8d2c eth0: Data path switched from VF: enP63451s1
Jul 6 23:55:07.744976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:55:07.745066 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:07.758659 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:55:07.764852 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:55:07.764953 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:55:07.773988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:55:07.774064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:07.788010 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:55:07.788155 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:55:07.795694 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:55:07.795818 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:55:08.062776 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:55:08.065838 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:55:08.078000 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:55:08.081493 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:55:08.081596 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:55:08.099659 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:55:08.457233 systemd[1]: Switching root.
Jul 6 23:55:08.585393 systemd-journald[176]: Journal stopped
Jul 6 23:55:02.269488 kernel: mlx5_core f7db:00:02.0 enP63451s1: Link up Jul 6 23:55:02.302482 kernel: hv_netvsc 7ced8d2c-eaf4-7ced-8d2c-eaf47ced8d2c eth0: Data path switched to VF: enP63451s1 Jul 6 23:55:02.302443 systemd-networkd[900]: enP63451s1: Link UP Jul 6 23:55:02.302798 systemd-networkd[900]: eth0: Link UP Jul 6 23:55:02.302979 systemd-networkd[900]: eth0: Gained carrier Jul 6 23:55:02.302990 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:02.322124 systemd-networkd[900]: enP63451s1: Gained carrier Jul 6 23:55:02.354573 systemd-networkd[900]: eth0: DHCPv4 address 10.200.8.46/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 6 23:55:02.964005 ignition[902]: Ignition 2.19.0 Jul 6 23:55:02.964019 ignition[902]: Stage: fetch-offline Jul 6 23:55:02.964067 ignition[902]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:02.964079 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:02.964207 ignition[902]: parsed url from cmdline: "" Jul 6 23:55:02.964212 ignition[902]: no config URL provided Jul 6 23:55:02.964219 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:02.964230 ignition[902]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:02.964239 ignition[902]: failed to fetch config: resource requires networking Jul 6 23:55:02.986894 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:55:02.972083 ignition[902]: Ignition finished successfully Jul 6 23:55:03.013795 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 6 23:55:03.035682 ignition[910]: Ignition 2.19.0 Jul 6 23:55:03.035693 ignition[910]: Stage: fetch Jul 6 23:55:03.035943 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:03.035958 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:03.036922 ignition[910]: parsed url from cmdline: "" Jul 6 23:55:03.037015 ignition[910]: no config URL provided Jul 6 23:55:03.037084 ignition[910]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:03.037305 ignition[910]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:03.037745 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 6 23:55:03.128674 ignition[910]: GET result: OK Jul 6 23:55:03.128821 ignition[910]: config has been read from IMDS userdata Jul 6 23:55:03.128866 ignition[910]: parsing config with SHA512: 209716e91e32fc4960d46b4e7d9a040341a83a25377b086e960d997a0e3d45fe0e2ba22c2e4e40e679189840aa262921dfe6bb8c3b3f9c3e39bc85d6776df631 Jul 6 23:55:03.136593 unknown[910]: fetched base config from "system" Jul 6 23:55:03.137376 ignition[910]: fetch: fetch complete Jul 6 23:55:03.136610 unknown[910]: fetched base config from "system" Jul 6 23:55:03.137385 ignition[910]: fetch: fetch passed Jul 6 23:55:03.136626 unknown[910]: fetched user config from "azure" Jul 6 23:55:03.137452 ignition[910]: Ignition finished successfully Jul 6 23:55:03.154052 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:55:03.167774 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 6 23:55:03.186923 ignition[916]: Ignition 2.19.0 Jul 6 23:55:03.186935 ignition[916]: Stage: kargs Jul 6 23:55:03.187175 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:03.187189 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:03.188095 ignition[916]: kargs: kargs passed Jul 6 23:55:03.188144 ignition[916]: Ignition finished successfully Jul 6 23:55:03.202260 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:55:03.213877 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:55:03.232113 ignition[922]: Ignition 2.19.0 Jul 6 23:55:03.232125 ignition[922]: Stage: disks Jul 6 23:55:03.235029 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:55:03.232358 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:03.241552 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:03.232372 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:03.250001 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:55:03.233283 ignition[922]: disks: disks passed Jul 6 23:55:03.256833 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:55:03.233331 ignition[922]: Ignition finished successfully Jul 6 23:55:03.279021 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:55:03.284406 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:55:03.303794 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:55:03.366162 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 6 23:55:03.374431 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:55:03.397610 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:55:03.513579 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:55:03.514207 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:55:03.516916 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:55:03.550656 systemd-networkd[900]: eth0: Gained IPv6LL Jul 6 23:55:03.554722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:03.561686 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:55:03.573489 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Jul 6 23:55:03.582067 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:03.582169 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:03.585007 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:03.591560 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:03.591346 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:55:03.593583 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:55:03.593630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:55:03.620140 systemd-networkd[900]: enP63451s1: Gained IPv6LL Jul 6 23:55:03.624240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:55:03.631983 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:55:03.645761 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:55:04.202505 coreos-metadata[944]: Jul 06 23:55:04.202 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:55:04.211592 coreos-metadata[944]: Jul 06 23:55:04.211 INFO Fetch successful Jul 6 23:55:04.215361 coreos-metadata[944]: Jul 06 23:55:04.211 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:55:04.224391 coreos-metadata[944]: Jul 06 23:55:04.222 INFO Fetch successful Jul 6 23:55:04.237791 coreos-metadata[944]: Jul 06 23:55:04.237 INFO wrote hostname ci-4081.3.4-a-6a836f1a00 to /sysroot/etc/hostname Jul 6 23:55:04.240568 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:55:04.271103 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:55:04.348180 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:55:04.369421 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:55:04.378918 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:55:05.204361 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:05.218712 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:55:05.233072 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:55:05.248326 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:55:05.261558 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:05.277909 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:55:05.299455 ignition[1062]: INFO : Ignition 2.19.0 Jul 6 23:55:05.299455 ignition[1062]: INFO : Stage: mount Jul 6 23:55:05.304703 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:05.304703 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:05.304703 ignition[1062]: INFO : mount: mount passed Jul 6 23:55:05.304703 ignition[1062]: INFO : Ignition finished successfully Jul 6 23:55:05.302679 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:55:05.329692 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:55:05.339147 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:05.355490 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072) Jul 6 23:55:05.364064 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:05.364160 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:05.367066 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:05.372892 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:05.374690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:55:05.410551 ignition[1089]: INFO : Ignition 2.19.0 Jul 6 23:55:05.410551 ignition[1089]: INFO : Stage: files Jul 6 23:55:05.418388 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:05.418388 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:05.418388 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:55:05.484029 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:55:05.484029 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:55:05.541756 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:55:05.549249 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:55:05.549249 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:55:05.542273 unknown[1089]: wrote ssh authorized keys file for user: core Jul 6 23:55:05.569073 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 6 23:55:05.574272 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 6 23:55:05.645066 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:55:05.809686 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 6 23:55:05.809686 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:55:05.824414 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.860176 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:55:05.905039 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 6 23:55:06.709531 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:55:07.033594 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:55:07.033594 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:55:07.062165 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:55:07.074920 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:55:07.074920 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:55:07.074920 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:55:07.074920 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:55:07.074920 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:55:07.074920 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:55:07.074920 ignition[1089]: INFO : files: files passed Jul 6 23:55:07.074920 ignition[1089]: INFO : Ignition finished successfully Jul 6 23:55:07.064077 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:55:07.092750 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:55:07.100647 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:55:07.105066 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:55:07.105165 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:55:07.138696 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:07.138696 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:07.155134 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:07.140793 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:55:07.148837 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:55:07.165602 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:55:07.190891 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:55:07.191019 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:55:07.199332 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:55:07.202131 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:55:07.207148 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:55:07.214713 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:55:07.228373 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:55:07.239627 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:55:07.250211 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:07.250423 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:07.251576 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:55:07.251978 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:55:07.252114 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:55:07.253144 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:55:07.253697 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:55:07.253989 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:55:07.254432 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:55:07.255239 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:07.255643 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:55:07.256032 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:55:07.256540 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:55:07.256903 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:55:07.257351 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:55:07.257712 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:55:07.257858 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:55:07.258924 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:07.259333 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:07.260068 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:55:07.297441 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:07.315176 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:55:07.315336 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:55:07.429367 ignition[1142]: INFO : Ignition 2.19.0 Jul 6 23:55:07.429367 ignition[1142]: INFO : Stage: umount Jul 6 23:55:07.429367 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:07.429367 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:07.429367 ignition[1142]: INFO : umount: umount passed Jul 6 23:55:07.429367 ignition[1142]: INFO : Ignition finished successfully Jul 6 23:55:07.326259 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:55:07.327844 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:55:07.336255 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:55:07.338603 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jul 6 23:55:07.343304 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:55:07.345739 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:55:07.375572 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:55:07.389805 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:55:07.390121 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:07.422266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:55:07.429397 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:55:07.430726 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:07.434868 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:55:07.435035 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:55:07.442277 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:55:07.443511 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:55:07.462668 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:55:07.463017 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:55:07.467111 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:55:07.467172 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:55:07.467397 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:55:07.467432 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:55:07.468153 systemd[1]: Stopped target network.target - Network. Jul 6 23:55:07.471127 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:55:07.471182 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:55:07.471550 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:55:07.471906 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:55:07.498121 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:07.502101 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:55:07.504509 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:55:07.507307 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:55:07.507360 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:55:07.523218 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:55:07.523276 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:55:07.528855 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:55:07.528930 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:55:07.533997 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:55:07.534061 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:55:07.541315 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:55:07.546704 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:55:07.552957 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:55:07.553560 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:55:07.553657 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 6 23:55:07.561534 systemd-networkd[900]: eth0: DHCPv6 lease lost Jul 6 23:55:07.574693 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:55:07.574814 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:55:07.593406 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:55:07.593524 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:55:07.603799 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:55:07.603868 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:07.623893 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:55:07.632794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:55:07.632888 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:55:07.636132 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:55:07.636197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:07.640900 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:55:07.640962 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:07.652679 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:55:07.655002 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:07.662305 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:07.687869 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:55:07.688023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:07.693556 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:55:07.693645 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:07.698610 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:55:07.698658 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:07.724939 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:55:07.725036 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:55:07.733930 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:55:07.734017 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:55:07.743504 kernel: hv_netvsc 7ced8d2c-eaf4-7ced-8d2c-eaf47ced8d2c eth0: Data path switched from VF: enP63451s1 Jul 6 23:55:07.744976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:55:07.745066 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:07.758659 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:55:07.764852 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:55:07.764953 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:07.773988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:07.774064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:07.788010 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:55:07.788155 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jul 6 23:55:07.795694 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:55:07.795818 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:55:08.062776 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:55:08.065838 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:55:08.078000 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:55:08.081493 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:55:08.081596 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:08.099659 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:55:08.457233 systemd[1]: Switching root. Jul 6 23:55:08.585393 systemd-journald[176]: Journal stopped Jul 6 23:55:12.623320 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Jul 6 23:55:12.623351 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:55:12.623362 kernel: SELinux: policy capability open_perms=1 Jul 6 23:55:12.623371 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:55:12.623382 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:55:12.623390 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:55:12.623399 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:55:12.623413 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:55:12.623421 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:55:12.623434 kernel: audit: type=1403 audit(1751846109.576:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:55:12.623444 systemd[1]: Successfully loaded SELinux policy in 126.903ms. Jul 6 23:55:12.623456 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.751ms. Jul 6 23:55:12.623479 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:12.623492 systemd[1]: Detected virtualization microsoft. Jul 6 23:55:12.623507 systemd[1]: Detected architecture x86-64. Jul 6 23:55:12.623518 systemd[1]: Detected first boot. Jul 6 23:55:12.623531 systemd[1]: Hostname set to <ci-4081.3.4-a-6a836f1a00>. Jul 6 23:55:12.623540 systemd[1]: Initializing machine ID from random generator. Jul 6 23:55:12.623553 zram_generator::config[1185]: No configuration found. Jul 6 23:55:12.623566 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:55:12.623577 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:55:12.623588 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:55:12.623599 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:55:12.623613 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:55:12.623623 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:55:12.623636 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:55:12.623651 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:55:12.623661 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:55:12.623671 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:55:12.623683 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:55:12.623694 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:55:12.623705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:12.623717 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:12.623728 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:55:12.623742 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:55:12.623754 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:55:12.623765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:12.623777 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:55:12.623788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:12.623800 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:55:12.623814 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:55:12.623825 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:55:12.623839 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:55:12.623851 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:12.623862 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:12.623874 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:55:12.623885 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:12.623898 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:55:12.623908 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:55:12.623923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:12.623934 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:12.623947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:12.623960 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:55:12.623971 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:55:12.623986 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:55:12.623999 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:55:12.624010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:12.624023 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:55:12.624036 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:55:12.624047 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:55:12.624060 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:55:12.624071 systemd[1]: Reached target machines.target - Containers. 
Jul 6 23:55:12.624086 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:55:12.624097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:12.624109 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:12.624123 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:55:12.624134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:12.624149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:55:12.624167 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:12.624188 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:55:12.624209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:12.624239 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:55:12.624263 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:55:12.624282 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:55:12.624304 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:55:12.624325 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:55:12.624346 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:12.624368 kernel: loop: module loaded Jul 6 23:55:12.624390 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:12.624416 kernel: fuse: init (API version 7.39) Jul 6 23:55:12.624435 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:55:12.624456 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:55:12.624510 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:55:12.624562 systemd-journald[1288]: Collecting audit messages is disabled. Jul 6 23:55:12.624611 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:55:12.624634 systemd[1]: Stopped verity-setup.service. Jul 6 23:55:12.624657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:12.624680 kernel: ACPI: bus type drm_connector registered Jul 6 23:55:12.624699 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:55:12.624721 systemd-journald[1288]: Journal started Jul 6 23:55:12.624763 systemd-journald[1288]: Runtime Journal (/run/log/journal/e228697aaa2f48439099c8829d46b850) is 8.0M, max 158.8M, 150.8M free. Jul 6 23:55:11.768045 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:55:11.918856 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:55:11.919235 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:55:12.633311 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:55:12.633986 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:55:12.636952 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:55:12.639663 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jul 6 23:55:12.643675 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:55:12.646851 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:55:12.649539 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:55:12.652902 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:12.658061 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:55:12.658386 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:55:12.662980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:12.663305 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:12.666913 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:55:12.667229 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:55:12.670760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:12.671024 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:12.675050 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:55:12.675341 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:55:12.678859 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:12.679148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:12.682599 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:12.686347 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:55:12.690308 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:55:12.712133 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:55:12.722554 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:55:12.732982 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:55:12.736828 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:55:12.736877 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:55:12.740559 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:55:12.745124 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:55:12.752593 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:55:12.755553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:12.775832 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:55:12.780135 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:55:12.783533 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:55:12.786656 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:55:12.791683 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:55:12.794294 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:55:12.807693 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:55:12.818285 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:55:12.825365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:12.834308 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:55:12.840228 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:55:12.852890 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:55:12.865616 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:55:12.878613 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:55:12.887379 systemd-journald[1288]: Time spent on flushing to /var/log/journal/e228697aaa2f48439099c8829d46b850 is 51.408ms for 958 entries. Jul 6 23:55:12.887379 systemd-journald[1288]: System Journal (/var/log/journal/e228697aaa2f48439099c8829d46b850) is 8.0M, max 2.6G, 2.6G free. Jul 6 23:55:13.065984 systemd-journald[1288]: Received client request to flush runtime journal. Jul 6 23:55:13.066071 kernel: loop0: detected capacity change from 0 to 140768 Jul 6 23:55:12.893800 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:55:12.898260 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:55:12.918971 udevadm[1332]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:55:12.946945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:13.028647 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:55:13.040675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:55:13.068442 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:55:13.089422 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:55:13.090629 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:55:13.140169 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 6 23:55:13.140193 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 6 23:55:13.148579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:13.441493 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:55:13.481491 kernel: loop1: detected capacity change from 0 to 142488 Jul 6 23:55:14.292493 kernel: loop2: detected capacity change from 0 to 221472 Jul 6 23:55:14.331755 kernel: loop3: detected capacity change from 0 to 31056 Jul 6 23:55:14.331937 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:55:14.341691 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:14.368707 systemd-udevd[1346]: Using default interface naming scheme 'v255'. Jul 6 23:55:14.551979 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:14.566841 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:55:14.645348 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:55:14.741643 kernel: loop4: detected capacity change from 0 to 140768 Jul 6 23:55:14.780906 kernel: loop5: detected capacity change from 0 to 142488 Jul 6 23:55:14.771554 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:55:14.790937 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:55:14.800619 kernel: loop6: detected capacity change from 0 to 221472 Jul 6 23:55:14.818562 kernel: loop7: detected capacity change from 0 to 31056 Jul 6 23:55:14.826040 (sd-merge)[1376]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 6 23:55:14.827097 (sd-merge)[1376]: Merged extensions into '/usr'. Jul 6 23:55:14.840964 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:55:14.840984 systemd[1]: Reloading... Jul 6 23:55:14.853489 kernel: hv_vmbus: registering driver hv_balloon Jul 6 23:55:14.861127 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 6 23:55:14.924511 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:55:14.948002 kernel: hv_vmbus: registering driver hyperv_fb Jul 6 23:55:14.951575 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 6 23:55:14.968537 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 6 23:55:14.983487 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:55:14.994502 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:55:15.140488 zram_generator::config[1423]: No configuration found. Jul 6 23:55:15.142285 systemd-networkd[1350]: lo: Link UP Jul 6 23:55:15.145502 systemd-networkd[1350]: lo: Gained carrier Jul 6 23:55:15.155304 systemd-networkd[1350]: Enumeration completed Jul 6 23:55:15.157989 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:15.161996 systemd-networkd[1350]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:55:15.257492 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1363) Jul 6 23:55:15.278235 kernel: mlx5_core f7db:00:02.0 enP63451s1: Link up Jul 6 23:55:15.298505 kernel: hv_netvsc 7ced8d2c-eaf4-7ced-8d2c-eaf47ced8d2c eth0: Data path switched to VF: enP63451s1 Jul 6 23:55:15.299909 systemd-networkd[1350]: enP63451s1: Link UP Jul 6 23:55:15.300034 systemd-networkd[1350]: eth0: Link UP Jul 6 23:55:15.300039 systemd-networkd[1350]: eth0: Gained carrier Jul 6 23:55:15.300060 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:15.305962 systemd-networkd[1350]: enP63451s1: Gained carrier Jul 6 23:55:15.341582 systemd-networkd[1350]: eth0: DHCPv4 address 10.200.8.46/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 6 23:55:15.446494 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jul 6 23:55:15.469847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:15.571914 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 6 23:55:15.577182 systemd[1]: Reloading finished in 735 ms. Jul 6 23:55:15.611998 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 6 23:55:15.615937 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:55:15.660370 systemd[1]: Starting ensure-sysext.service... Jul 6 23:55:15.676555 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:55:15.687259 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:55:15.700957 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:55:15.718827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:15.735435 systemd[1]: Reloading requested from client PID 1515 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:55:15.735474 systemd[1]: Reloading... Jul 6 23:55:15.777125 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:55:15.777746 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:55:15.778707 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:55:15.779040 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jul 6 23:55:15.779128 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jul 6 23:55:15.815078 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:55:15.815095 systemd-tmpfiles[1518]: Skipping /boot Jul 6 23:55:15.823486 zram_generator::config[1550]: No configuration found. Jul 6 23:55:15.861312 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:55:15.862648 systemd-tmpfiles[1518]: Skipping /boot Jul 6 23:55:16.017344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:16.110792 systemd[1]: Reloading finished in 374 ms. Jul 6 23:55:16.131100 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:55:16.139984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:55:16.143899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:16.159724 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:16.163396 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:55:16.168775 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:55:16.177709 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:55:16.182186 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:55:16.192210 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:55:16.205221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:16.205502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:16.212272 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:16.225825 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:16.232961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:16.250839 lvm[1619]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:55:16.244241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:16.244437 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:16.245853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:16.246559 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:16.255730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:16.256310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:16.276952 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:16.277331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:16.286418 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:16.299601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:16.307928 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:16.308380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:16.311496 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:55:16.322116 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:16.322500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:16.325105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:16.325284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:16.325988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:16.326153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:16.336700 systemd[1]: Finished ensure-sysext.service. Jul 6 23:55:16.339402 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:16.340685 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:16.340908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:16.351146 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:55:16.359492 lvm[1647]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:55:16.366917 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:16.382745 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:55:16.387140 systemd-resolved[1622]: Positive Trust Anchors: Jul 6 23:55:16.387150 systemd-resolved[1622]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:55:16.387200 systemd-resolved[1622]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:55:16.393964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:16.410709 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:16.414661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:16.414755 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:55:16.420435 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:16.421555 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:55:16.425154 augenrules[1654]: No rules Jul 6 23:55:16.426210 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:16.426744 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:55:16.427475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:16.427601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:16.428000 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:55:16.428116 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:55:16.428458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:16.428590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:16.428895 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:16.429009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:16.434617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:55:16.434712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:55:16.440756 systemd-resolved[1622]: Using system hostname 'ci-4081.3.4-a-6a836f1a00'. Jul 6 23:55:16.442980 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:55:16.443237 systemd[1]: Reached target network.target - Network. Jul 6 23:55:16.451519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:16.463412 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:55:16.597570 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:16.726011 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
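The positive trust anchor recorded above is the standard IANA root-zone DS record for KSK-2017: owner "." (the root zone), key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). As an illustrative aside rather than part of the log, a minimal Python sketch that splits such a record into named fields:

# Minimal sketch: decode the fields of a DNSSEC DS record string like the
# root trust anchor systemd-resolved logs at boot. Field meanings follow
# RFC 4034; the algorithm/digest names below cover only the common values.
DS_RECORD = (". IN DS 20326 8 2 "
             "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

ALGORITHMS = {8: "RSA/SHA-256", 13: "ECDSA P-256/SHA-256"}
DIGEST_TYPES = {1: "SHA-1", 2: "SHA-256"}

def parse_ds(record: str) -> dict:
    owner, _class, rtype, key_tag, alg, digest_type, digest = record.split()
    assert rtype == "DS"
    return {
        "owner": owner,                          # "." is the DNS root zone
        "key_tag": int(key_tag),                 # 20326 identifies KSK-2017
        "algorithm": ALGORITHMS.get(int(alg), alg),
        "digest_type": DIGEST_TYPES.get(int(digest_type), digest_type),
        "digest": digest,                        # SHA-256 of the root DNSKEY
    }

print(parse_ds(DS_RECORD))

The negative trust anchors listed alongside it are the private and special-use zones (RFC 6303-style reverse zones, home.arpa, .local, and similar) below which systemd-resolved skips DNSSEC validation.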
Jul 6 23:55:16.730432 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:55:16.797800 systemd-networkd[1350]: enP63451s1: Gained IPv6LL Jul 6 23:55:16.925783 systemd-networkd[1350]: eth0: Gained IPv6LL Jul 6 23:55:16.928842 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:55:16.932390 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:55:18.662562 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:55:18.674620 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:55:18.682798 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:55:18.727179 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:55:18.731284 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:55:18.735197 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:55:18.739481 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:55:18.743688 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:55:18.747620 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:55:18.751798 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:55:18.755944 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:55:18.755991 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:55:18.759392 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:55:18.765950 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:55:18.786850 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:55:18.813322 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:55:18.817500 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:55:18.821524 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:55:18.824887 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:55:18.828008 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:55:18.828060 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:55:18.837628 systemd[1]: Starting chronyd.service - NTP client/server... Jul 6 23:55:18.845613 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:55:18.858770 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:55:18.879264 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:55:18.886134 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:55:18.891416 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:55:18.895586 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jul 6 23:55:18.895644 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 6 23:55:18.902846 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 6 23:55:18.904005 (chronyd)[1678]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 6 23:55:18.906201 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 6 23:55:18.909612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:18.914423 jq[1682]: false Jul 6 23:55:18.922679 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:55:18.927536 chronyd[1691]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 6 23:55:18.927657 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:55:18.935294 KVP[1686]: KVP starting; pid is:1686 Jul 6 23:55:18.939642 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:55:18.951593 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:55:18.959335 kernel: hv_utils: KVP IC version 4.0 Jul 6 23:55:18.958366 KVP[1686]: KVP LIC Version: 3.1 Jul 6 23:55:18.958514 chronyd[1691]: Timezone right/UTC failed leap second check, ignoring Jul 6 23:55:18.958777 chronyd[1691]: Loaded seccomp filter (level 2) Jul 6 23:55:18.963750 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:55:18.969212 extend-filesystems[1685]: Found loop4 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found loop5 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found loop6 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found loop7 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found sda Jul 6 23:55:18.973358 extend-filesystems[1685]: Found sda1 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found sda2 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found sda3 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found usr Jul 6 23:55:18.973358 extend-filesystems[1685]: Found sda4 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found sda6 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found sda7 Jul 6 23:55:18.973358 extend-filesystems[1685]: Found sda9 Jul 6 23:55:18.973358 extend-filesystems[1685]: Checking size of /dev/sda9 Jul 6 23:55:18.997062 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:55:18.998116 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:55:18.998622 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:55:19.008858 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:55:19.021628 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:55:19.025906 systemd[1]: Started chronyd.service - NTP client/server. Jul 6 23:55:19.029905 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:55:19.030613 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:55:19.033931 systemd[1]: motdgen.service: Deactivated successfully. 
Jul 6 23:55:19.034719 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:55:19.041720 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:55:19.051125 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:55:19.051363 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:55:19.068951 extend-filesystems[1685]: Old size kept for /dev/sda9 Jul 6 23:55:19.077641 jq[1711]: true Jul 6 23:55:19.092373 extend-filesystems[1685]: Found sr0 Jul 6 23:55:19.091994 dbus-daemon[1681]: [system] SELinux support is enabled Jul 6 23:55:19.094837 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:55:19.095137 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:55:19.100850 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:55:19.111272 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:55:19.111316 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:55:19.117678 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:55:19.117719 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:55:19.122532 update_engine[1707]: I20250706 23:55:19.122362 1707 main.cc:92] Flatcar Update Engine starting Jul 6 23:55:19.124617 update_engine[1707]: I20250706 23:55:19.124576 1707 update_check_scheduler.cc:74] Next update check in 3m22s Jul 6 23:55:19.133591 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:55:19.142943 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:55:19.157281 jq[1728]: true Jul 6 23:55:19.164811 (ntainerd)[1734]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:55:19.189248 tar[1717]: linux-amd64/helm Jul 6 23:55:19.273486 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1746) Jul 6 23:55:19.294928 systemd-logind[1705]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:55:19.304714 systemd-logind[1705]: New seat seat0. Jul 6 23:55:19.305189 coreos-metadata[1680]: Jul 06 23:55:19.305 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:55:19.309142 systemd[1]: Started systemd-logind.service - User Login Management. 
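Around this point coreos-metadata begins pulling instance data: first from the Azure wireserver at 168.63.129.16, then, as the entries that follow show, from the instance metadata service (IMDS) at 169.254.169.254. A hedged sketch of those two fetches, assuming it runs inside an Azure VM (IMDS rejects any request lacking the "Metadata: true" header, and both endpoints are only reachable from the VM itself):

# Hedged sketch: replicate the two metadata fetches coreos-metadata logs here.
import urllib.request

# Azure IMDS requires the "Metadata: true" header on every request.
imds = urllib.request.Request(
    "http://169.254.169.254/metadata/instance/compute/vmSize"
    "?api-version=2017-08-01&format=text",
    headers={"Metadata": "true"},
)
print(urllib.request.urlopen(imds, timeout=5).read().decode())

# The wireserver (168.63.129.16) version probe needs no special headers.
ws = "http://168.63.129.16/?comp=versions"
print(urllib.request.urlopen(ws, timeout=5).read().decode())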
Jul 6 23:55:19.313314 coreos-metadata[1680]: Jul 06 23:55:19.313 INFO Fetch successful Jul 6 23:55:19.314628 coreos-metadata[1680]: Jul 06 23:55:19.314 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 6 23:55:19.322488 coreos-metadata[1680]: Jul 06 23:55:19.321 INFO Fetch successful Jul 6 23:55:19.322488 coreos-metadata[1680]: Jul 06 23:55:19.321 INFO Fetching http://168.63.129.16/machine/1dcaadf9-b09d-4e6b-953d-b2c8c9837f26/30ed41da%2D0471%2D4ada%2D8ec5%2De83194319847.%5Fci%2D4081.3.4%2Da%2D6a836f1a00?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 6 23:55:19.328742 coreos-metadata[1680]: Jul 06 23:55:19.328 INFO Fetch successful Jul 6 23:55:19.330576 coreos-metadata[1680]: Jul 06 23:55:19.330 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:55:19.343647 coreos-metadata[1680]: Jul 06 23:55:19.343 INFO Fetch successful Jul 6 23:55:19.359851 sshd_keygen[1727]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:55:19.418290 bash[1767]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:55:19.420263 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:55:19.473156 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:55:19.476566 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:55:19.502824 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:55:19.523906 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 6 23:55:19.534612 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:55:19.549803 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:55:19.550823 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:55:19.557192 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:55:19.564135 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:55:19.624611 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:55:19.641099 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:55:19.655324 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:55:19.662505 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:55:19.671678 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 6 23:55:19.686330 locksmithd[1737]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:55:20.085700 tar[1717]: linux-amd64/LICENSE Jul 6 23:55:20.085700 tar[1717]: linux-amd64/README.md Jul 6 23:55:20.099596 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:55:20.256869 containerd[1734]: time="2025-07-06T23:55:20.256777500Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:55:20.295184 containerd[1734]: time="2025-07-06T23:55:20.294943100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.297028 containerd[1734]: time="2025-07-06T23:55:20.296979800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.297629 containerd[1734]: time="2025-07-06T23:55:20.297143600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:55:20.297629 containerd[1734]: time="2025-07-06T23:55:20.297172100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:55:20.297629 containerd[1734]: time="2025-07-06T23:55:20.297352900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:55:20.297629 containerd[1734]: time="2025-07-06T23:55:20.297374300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.297629 containerd[1734]: time="2025-07-06T23:55:20.297493800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.297629 containerd[1734]: time="2025-07-06T23:55:20.297524500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.298103 containerd[1734]: time="2025-07-06T23:55:20.298073000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.298181 containerd[1734]: time="2025-07-06T23:55:20.298165400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.298275 containerd[1734]: time="2025-07-06T23:55:20.298255700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.298341 containerd[1734]: time="2025-07-06T23:55:20.298328300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.298790 containerd[1734]: time="2025-07-06T23:55:20.298508500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.298790 containerd[1734]: time="2025-07-06T23:55:20.298753900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.299006 containerd[1734]: time="2025-07-06T23:55:20.298975800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.299006 containerd[1734]: time="2025-07-06T23:55:20.299000000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:55:20.299132 containerd[1734]: time="2025-07-06T23:55:20.299111100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 6 23:55:20.299193 containerd[1734]: time="2025-07-06T23:55:20.299173800Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:55:20.364957 containerd[1734]: time="2025-07-06T23:55:20.364792300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:55:20.364957 containerd[1734]: time="2025-07-06T23:55:20.364912800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:55:20.364957 containerd[1734]: time="2025-07-06T23:55:20.364940400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:55:20.364957 containerd[1734]: time="2025-07-06T23:55:20.364959400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.364976900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.365193300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366345000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366591900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366618700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366637000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366655900Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366690900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366706500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366727600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366745700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366764900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366781300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.366873 containerd[1734]: time="2025-07-06T23:55:20.366800100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.366827500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.366849000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.366868400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.366887200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.366904000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.366923400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.366941100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.366975500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.366995800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.367016300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.367035100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.367068900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.367096800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.367121000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:55:20.367344 containerd[1734]: time="2025-07-06T23:55:20.367149800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367166800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367182600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367256200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367289600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367389100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367409200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367424700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367444800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367475200Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:55:20.367861 containerd[1734]: time="2025-07-06T23:55:20.367490300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.368243 containerd[1734]: time="2025-07-06T23:55:20.367880800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:55:20.368243 containerd[1734]: time="2025-07-06T23:55:20.367962900Z" level=info msg="Connect containerd service" Jul 6 23:55:20.368243 containerd[1734]: time="2025-07-06T23:55:20.368008900Z" level=info msg="using legacy CRI server" Jul 6 23:55:20.368243 containerd[1734]: time="2025-07-06T23:55:20.368019400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:55:20.368243 containerd[1734]: time="2025-07-06T23:55:20.368154900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:55:20.371135 containerd[1734]: time="2025-07-06T23:55:20.370549400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:55:20.371632 containerd[1734]: time="2025-07-06T23:55:20.371430900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:55:20.371632 containerd[1734]: time="2025-07-06T23:55:20.371517900Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:55:20.376132 containerd[1734]: time="2025-07-06T23:55:20.371599400Z" level=info msg="Start subscribing containerd event" Jul 6 23:55:20.376483 containerd[1734]: time="2025-07-06T23:55:20.376437600Z" level=info msg="Start recovering state" Jul 6 23:55:20.377576 containerd[1734]: time="2025-07-06T23:55:20.377548300Z" level=info msg="Start event monitor" Jul 6 23:55:20.377668 containerd[1734]: time="2025-07-06T23:55:20.377587900Z" level=info msg="Start snapshots syncer" Jul 6 23:55:20.377668 containerd[1734]: time="2025-07-06T23:55:20.377605700Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:55:20.377668 containerd[1734]: time="2025-07-06T23:55:20.377616700Z" level=info msg="Start streaming server" Jul 6 23:55:20.381788 containerd[1734]: time="2025-07-06T23:55:20.377715800Z" level=info msg="containerd successfully booted in 0.122210s" Jul 6 23:55:20.377827 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:55:20.780701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:20.784958 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:20.786390 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:55:20.791021 systemd[1]: Startup finished in 829ms (firmware) + 25.732s (loader) + 1.168s (kernel) + 11.581s (initrd) + 11.339s (userspace) = 50.651s. Jul 6 23:55:21.116192 login[1819]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:55:21.118039 login[1820]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:55:21.131301 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:55:21.141834 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:55:21.145676 systemd-logind[1705]: New session 1 of user core. Jul 6 23:55:21.149368 systemd-logind[1705]: New session 2 of user core. 
Jul 6 23:55:21.175205 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:55:21.186791 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:55:21.193008 (systemd)[1855]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:55:21.394057 systemd[1855]: Queued start job for default target default.target. Jul 6 23:55:21.400280 systemd[1855]: Created slice app.slice - User Application Slice. Jul 6 23:55:21.400654 systemd[1855]: Reached target paths.target - Paths. Jul 6 23:55:21.400678 systemd[1855]: Reached target timers.target - Timers. Jul 6 23:55:21.404640 systemd[1855]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:55:21.424833 systemd[1855]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:55:21.424923 systemd[1855]: Reached target sockets.target - Sockets. Jul 6 23:55:21.424941 systemd[1855]: Reached target basic.target - Basic System. Jul 6 23:55:21.425567 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:55:21.426342 systemd[1855]: Reached target default.target - Main User Target. Jul 6 23:55:21.426392 systemd[1855]: Startup finished in 225ms. Jul 6 23:55:21.430648 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:55:21.431622 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:55:21.615063 kubelet[1844]: E0706 23:55:21.614974 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:21.618104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:21.618293 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:21.618751 systemd[1]: kubelet.service: Consumed 1.051s CPU time. 
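This kubelet failure is expected at this stage of boot: /var/lib/kubelet/config.yaml is typically only written later (for example by kubeadm during node join), and systemd keeps restarting the unit until it appears, as the retry at 23:55:31 below repeats the same error. A minimal sketch of the missing precondition, assuming the standard kubeadm-managed layout:

# Hedged sketch of the precondition behind the kubelet failure above: the
# config file kubelet was asked to load simply does not exist yet.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    # Matches the logged error: "open /var/lib/kubelet/config.yaml:
    # no such file or directory"; systemd restarts the unit until it appears.
    print(f"{KUBELET_CONFIG}: not written yet; kubelet will keep exiting")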
Jul 6 23:55:21.647618 waagent[1826]: 2025-07-06T23:55:21.647432Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 6 23:55:21.659567 waagent[1826]: 2025-07-06T23:55:21.648062Z INFO Daemon Daemon OS: flatcar 4081.3.4 Jul 6 23:55:21.659567 waagent[1826]: 2025-07-06T23:55:21.650765Z INFO Daemon Daemon Python: 3.11.9 Jul 6 23:55:21.659567 waagent[1826]: 2025-07-06T23:55:21.657140Z INFO Daemon Daemon Run daemon Jul 6 23:55:21.659732 waagent[1826]: 2025-07-06T23:55:21.659659Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.4' Jul 6 23:55:21.675218 waagent[1826]: 2025-07-06T23:55:21.665023Z INFO Daemon Daemon Using waagent for provisioning Jul 6 23:55:21.675218 waagent[1826]: 2025-07-06T23:55:21.665330Z INFO Daemon Daemon Activate resource disk Jul 6 23:55:21.675218 waagent[1826]: 2025-07-06T23:55:21.665430Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 6 23:55:21.675218 waagent[1826]: 2025-07-06T23:55:21.671422Z INFO Daemon Daemon Found device: None Jul 6 23:55:21.675218 waagent[1826]: 2025-07-06T23:55:21.675016Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 6 23:55:21.714161 waagent[1826]: 2025-07-06T23:55:21.678947Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 6 23:55:21.714161 waagent[1826]: 2025-07-06T23:55:21.688176Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:55:21.714161 waagent[1826]: 2025-07-06T23:55:21.691408Z INFO Daemon Daemon Running default provisioning handler Jul 6 23:55:21.714161 waagent[1826]: 2025-07-06T23:55:21.709842Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 6 23:55:21.721040 waagent[1826]: 2025-07-06T23:55:21.720959Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 6 23:55:21.727014 waagent[1826]: 2025-07-06T23:55:21.726923Z INFO Daemon Daemon cloud-init is enabled: False Jul 6 23:55:21.733211 waagent[1826]: 2025-07-06T23:55:21.727769Z INFO Daemon Daemon Copying ovf-env.xml Jul 6 23:55:21.806928 waagent[1826]: 2025-07-06T23:55:21.806774Z INFO Daemon Daemon Successfully mounted dvd Jul 6 23:55:21.836601 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 6 23:55:21.840232 waagent[1826]: 2025-07-06T23:55:21.839940Z INFO Daemon Daemon Detect protocol endpoint Jul 6 23:55:21.844634 waagent[1826]: 2025-07-06T23:55:21.843597Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:55:21.844634 waagent[1826]: 2025-07-06T23:55:21.843816Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 6 23:55:21.844634 waagent[1826]: 2025-07-06T23:55:21.843898Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 6 23:55:21.844634 waagent[1826]: 2025-07-06T23:55:21.844096Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 6 23:55:21.844634 waagent[1826]: 2025-07-06T23:55:21.844181Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 6 23:55:21.885653 waagent[1826]: 2025-07-06T23:55:21.885578Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 6 23:55:21.895018 waagent[1826]: 2025-07-06T23:55:21.886157Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 6 23:55:21.895018 waagent[1826]: 2025-07-06T23:55:21.890749Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 6 23:55:22.081760 waagent[1826]: 2025-07-06T23:55:22.081600Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 6 23:55:22.085330 waagent[1826]: 2025-07-06T23:55:22.085243Z INFO Daemon Daemon Forcing an update of the goal state. Jul 6 23:55:22.091836 waagent[1826]: 2025-07-06T23:55:22.091773Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:55:22.109073 waagent[1826]: 2025-07-06T23:55:22.109004Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 6 23:55:22.124151 waagent[1826]: 2025-07-06T23:55:22.109850Z INFO Daemon Jul 6 23:55:22.124151 waagent[1826]: 2025-07-06T23:55:22.110424Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8d516358-d451-4bcf-9424-9f819d2eb024 eTag: 3305646604935845419 source: Fabric] Jul 6 23:55:22.124151 waagent[1826]: 2025-07-06T23:55:22.111628Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 6 23:55:22.124151 waagent[1826]: 2025-07-06T23:55:22.112701Z INFO Daemon Jul 6 23:55:22.124151 waagent[1826]: 2025-07-06T23:55:22.113483Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:55:22.127354 waagent[1826]: 2025-07-06T23:55:22.127302Z INFO Daemon Daemon Downloading artifacts profile blob Jul 6 23:55:22.193155 waagent[1826]: 2025-07-06T23:55:22.193057Z INFO Daemon Downloaded certificate {'thumbprint': '648A8DBD843B734F4172D52E0C9EC629075FA246', 'hasPrivateKey': True} Jul 6 23:55:22.198348 waagent[1826]: 2025-07-06T23:55:22.198274Z INFO Daemon Fetch goal state completed Jul 6 23:55:22.205925 waagent[1826]: 2025-07-06T23:55:22.205871Z INFO Daemon Daemon Starting provisioning Jul 6 23:55:22.213484 waagent[1826]: 2025-07-06T23:55:22.206133Z INFO Daemon Daemon Handle ovf-env.xml. Jul 6 23:55:22.213484 waagent[1826]: 2025-07-06T23:55:22.207475Z INFO Daemon Daemon Set hostname [ci-4081.3.4-a-6a836f1a00] Jul 6 23:55:22.215017 waagent[1826]: 2025-07-06T23:55:22.214950Z INFO Daemon Daemon Publish hostname [ci-4081.3.4-a-6a836f1a00] Jul 6 23:55:22.222955 waagent[1826]: 2025-07-06T23:55:22.215334Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 6 23:55:22.222955 waagent[1826]: 2025-07-06T23:55:22.216603Z INFO Daemon Daemon Primary interface is [eth0] Jul 6 23:55:22.240657 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:22.240668 systemd-networkd[1350]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 6 23:55:22.240716 systemd-networkd[1350]: eth0: DHCP lease lost Jul 6 23:55:22.242045 waagent[1826]: 2025-07-06T23:55:22.241946Z INFO Daemon Daemon Create user account if not exists Jul 6 23:55:22.245015 waagent[1826]: 2025-07-06T23:55:22.244946Z INFO Daemon Daemon User core already exists, skip useradd Jul 6 23:55:22.249608 waagent[1826]: 2025-07-06T23:55:22.245154Z INFO Daemon Daemon Configure sudoer Jul 6 23:55:22.249608 waagent[1826]: 2025-07-06T23:55:22.246408Z INFO Daemon Daemon Configure sshd Jul 6 23:55:22.249608 waagent[1826]: 2025-07-06T23:55:22.247736Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 6 23:55:22.249608 waagent[1826]: 2025-07-06T23:55:22.248278Z INFO Daemon Daemon Deploy ssh public key. Jul 6 23:55:22.257846 systemd-networkd[1350]: eth0: DHCPv6 lease lost Jul 6 23:55:22.288554 systemd-networkd[1350]: eth0: DHCPv4 address 10.200.8.46/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 6 23:55:23.336093 waagent[1826]: 2025-07-06T23:55:23.336027Z INFO Daemon Daemon Provisioning complete Jul 6 23:55:23.348484 waagent[1826]: 2025-07-06T23:55:23.348401Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 6 23:55:23.364400 waagent[1826]: 2025-07-06T23:55:23.350183Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 6 23:55:23.364400 waagent[1826]: 2025-07-06T23:55:23.357312Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 6 23:55:23.539904 waagent[1909]: 2025-07-06T23:55:23.539800Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 6 23:55:23.540306 waagent[1909]: 2025-07-06T23:55:23.539975Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.4 Jul 6 23:55:23.540306 waagent[1909]: 2025-07-06T23:55:23.540063Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 6 23:55:23.576728 waagent[1909]: 2025-07-06T23:55:23.576615Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 6 23:55:23.576966 waagent[1909]: 2025-07-06T23:55:23.576912Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:55:23.577063 waagent[1909]: 2025-07-06T23:55:23.577022Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:55:23.586287 waagent[1909]: 2025-07-06T23:55:23.586131Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:55:23.597168 waagent[1909]: 2025-07-06T23:55:23.597104Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 6 23:55:23.597753 waagent[1909]: 2025-07-06T23:55:23.597691Z INFO ExtHandler Jul 6 23:55:23.597877 waagent[1909]: 2025-07-06T23:55:23.597792Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1f0ce75a-805c-473c-868c-f7cd84d7d203 eTag: 3305646604935845419 source: Fabric] Jul 6 23:55:23.598173 waagent[1909]: 2025-07-06T23:55:23.598124Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 6 23:55:23.598785 waagent[1909]: 2025-07-06T23:55:23.598726Z INFO ExtHandler Jul 6 23:55:23.598855 waagent[1909]: 2025-07-06T23:55:23.598815Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:55:23.603393 waagent[1909]: 2025-07-06T23:55:23.603339Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 6 23:55:23.688201 waagent[1909]: 2025-07-06T23:55:23.688056Z INFO ExtHandler Downloaded certificate {'thumbprint': '648A8DBD843B734F4172D52E0C9EC629075FA246', 'hasPrivateKey': True} Jul 6 23:55:23.689451 waagent[1909]: 2025-07-06T23:55:23.689384Z INFO ExtHandler Fetch goal state completed Jul 6 23:55:23.704424 waagent[1909]: 2025-07-06T23:55:23.704337Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1909 Jul 6 23:55:23.704626 waagent[1909]: 2025-07-06T23:55:23.704575Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 6 23:55:23.706518 waagent[1909]: 2025-07-06T23:55:23.706442Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.4', '', 'Flatcar Container Linux by Kinvolk'] Jul 6 23:55:23.706883 waagent[1909]: 2025-07-06T23:55:23.706833Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 6 23:55:23.737918 waagent[1909]: 2025-07-06T23:55:23.737868Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 6 23:55:23.738145 waagent[1909]: 2025-07-06T23:55:23.738098Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 6 23:55:23.745146 waagent[1909]: 2025-07-06T23:55:23.745098Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 6 23:55:23.754914 systemd[1]: Reloading requested from client PID 1922 ('systemctl') (unit waagent.service)... Jul 6 23:55:23.754947 systemd[1]: Reloading... Jul 6 23:55:23.885521 zram_generator::config[1959]: No configuration found. Jul 6 23:55:24.006942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:24.090028 systemd[1]: Reloading finished in 334 ms. Jul 6 23:55:24.119814 waagent[1909]: 2025-07-06T23:55:24.119698Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 6 23:55:24.127612 systemd[1]: Reloading requested from client PID 2013 ('systemctl') (unit waagent.service)... Jul 6 23:55:24.127629 systemd[1]: Reloading... Jul 6 23:55:24.212603 zram_generator::config[2050]: No configuration found. Jul 6 23:55:24.346453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:24.430349 systemd[1]: Reloading finished in 302 ms. Jul 6 23:55:24.457483 waagent[1909]: 2025-07-06T23:55:24.456696Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 6 23:55:24.457483 waagent[1909]: 2025-07-06T23:55:24.456908Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 6 23:55:25.707811 waagent[1909]: 2025-07-06T23:55:25.707704Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Jul 6 23:55:25.708525 waagent[1909]: 2025-07-06T23:55:25.708436Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 6 23:55:25.709302 waagent[1909]: 2025-07-06T23:55:25.709248Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 6 23:55:25.710065 waagent[1909]: 2025-07-06T23:55:25.710010Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 6 23:55:25.710211 waagent[1909]: 2025-07-06T23:55:25.710154Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:55:25.710655 waagent[1909]: 2025-07-06T23:55:25.710585Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 6 23:55:25.710728 waagent[1909]: 2025-07-06T23:55:25.710655Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 6 23:55:25.711119 waagent[1909]: 2025-07-06T23:55:25.711078Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:55:25.711205 waagent[1909]: 2025-07-06T23:55:25.711168Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:55:25.711374 waagent[1909]: 2025-07-06T23:55:25.711331Z INFO EnvHandler ExtHandler Configure routes Jul 6 23:55:25.711455 waagent[1909]: 2025-07-06T23:55:25.711417Z INFO EnvHandler ExtHandler Gateway:None Jul 6 23:55:25.711560 waagent[1909]: 2025-07-06T23:55:25.711521Z INFO EnvHandler ExtHandler Routes:None Jul 6 23:55:25.714487 waagent[1909]: 2025-07-06T23:55:25.712178Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 6 23:55:25.714487 waagent[1909]: 2025-07-06T23:55:25.712295Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 6 23:55:25.714743 waagent[1909]: 2025-07-06T23:55:25.714677Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:55:25.715079 waagent[1909]: 2025-07-06T23:55:25.715011Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 6 23:55:25.715347 waagent[1909]: 2025-07-06T23:55:25.715288Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 6 23:55:25.715347 waagent[1909]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 6 23:55:25.715347 waagent[1909]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 6 23:55:25.715347 waagent[1909]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 6 23:55:25.715347 waagent[1909]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:55:25.715347 waagent[1909]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:55:25.715347 waagent[1909]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:55:25.715685 waagent[1909]: 2025-07-06T23:55:25.715643Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 6 23:55:25.722240 waagent[1909]: 2025-07-06T23:55:25.722179Z INFO ExtHandler ExtHandler Jul 6 23:55:25.722356 waagent[1909]: 2025-07-06T23:55:25.722310Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5ab4e28b-8d1d-436c-a58a-0d37b248ecdd correlation 26edb6e8-7aa2-4841-bf72-c69d3426fd97 created: 2025-07-06T23:54:19.123894Z] Jul 6 23:55:25.723508 waagent[1909]: 2025-07-06T23:55:25.722862Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
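The routing table waagent just dumped comes straight from /proc/net/route, where addresses are 32-bit host-order (little-endian on x86) hex, so 0108C80A in the first row is the DHCP gateway 10.200.8.1 and that row is the default route. A small Python sketch of the decoding, using the values from that dump:

# Hedged sketch: decode the /proc/net/route entries waagent printed above.
import socket
import struct

def hex_to_ip(field: str) -> str:
    # /proc/net/route stores IPv4 addresses as host-order (little-endian
    # on x86) 32-bit hex; unpack and render as a dotted quad.
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

# First row from the table above: default route via the DHCP gateway.
iface, dest, gw = "eth0", "00000000", "0108C80A"
print(iface, hex_to_ip(dest), "->", hex_to_ip(gw))  # eth0 0.0.0.0 -> 10.200.8.1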
Jul 6 23:55:25.724412 waagent[1909]: 2025-07-06T23:55:25.724362Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jul 6 23:55:25.765139 waagent[1909]: 2025-07-06T23:55:25.764885Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FAEB7D22-9565-4473-8E4F-3E11833F100B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 6 23:55:25.776267 waagent[1909]: 2025-07-06T23:55:25.776182Z INFO MonitorHandler ExtHandler Network interfaces: Jul 6 23:55:25.776267 waagent[1909]: Executing ['ip', '-a', '-o', 'link']: Jul 6 23:55:25.776267 waagent[1909]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 6 23:55:25.776267 waagent[1909]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2c:ea:f4 brd ff:ff:ff:ff:ff:ff Jul 6 23:55:25.776267 waagent[1909]: 3: enP63451s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2c:ea:f4 brd ff:ff:ff:ff:ff:ff\ altname enP63451p0s2 Jul 6 23:55:25.776267 waagent[1909]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 6 23:55:25.776267 waagent[1909]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 6 23:55:25.776267 waagent[1909]: 2: eth0 inet 10.200.8.46/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 6 23:55:25.776267 waagent[1909]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 6 23:55:25.776267 waagent[1909]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 6 23:55:25.776267 waagent[1909]: 2: eth0 inet6 fe80::7eed:8dff:fe2c:eaf4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:55:25.776267 waagent[1909]: 3: enP63451s1 inet6 fe80::7eed:8dff:fe2c:eaf4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:55:25.837735 waagent[1909]: 2025-07-06T23:55:25.837660Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jul 6 23:55:25.837735 waagent[1909]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:25.837735 waagent[1909]: pkts bytes target prot opt in out source destination Jul 6 23:55:25.837735 waagent[1909]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:25.837735 waagent[1909]: pkts bytes target prot opt in out source destination Jul 6 23:55:25.837735 waagent[1909]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:25.837735 waagent[1909]: pkts bytes target prot opt in out source destination Jul 6 23:55:25.837735 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:55:25.837735 waagent[1909]: 6 519 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:55:25.837735 waagent[1909]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:55:25.842386 waagent[1909]: 2025-07-06T23:55:25.842315Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 6 23:55:25.842386 waagent[1909]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:25.842386 waagent[1909]: pkts bytes target prot opt in out source destination Jul 6 23:55:25.842386 waagent[1909]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:25.842386 waagent[1909]: pkts bytes target prot opt in out source destination Jul 6 23:55:25.842386 waagent[1909]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:25.842386 waagent[1909]: pkts bytes target prot opt in out source destination Jul 6 23:55:25.842386 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:55:25.842386 waagent[1909]: 14 1460 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:55:25.842386 waagent[1909]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:55:25.843647 waagent[1909]: 2025-07-06T23:55:25.842682Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 6 23:55:31.869016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:55:31.874757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:31.991346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:31.996650 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:32.671904 kubelet[2143]: E0706 23:55:32.671839 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:32.676254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:32.676502 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:35.795064 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:55:35.800785 systemd[1]: Started sshd@0-10.200.8.46:22-10.200.16.10:60180.service - OpenSSH per-connection server daemon (10.200.16.10:60180). Jul 6 23:55:36.472926 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 60180 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:36.474411 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:36.479106 systemd-logind[1705]: New session 3 of user core. 
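The three OUTPUT rules waagent installed above implement its WireServer protection: DNS to 168.63.129.16 is always allowed, traffic owned by root (UID 0, i.e. the agent itself) is allowed, and any other new or invalid connection to that address is dropped. A small Python predicate mirroring the first-match-wins evaluation of that chain (an illustration of the logic only, not how netfilter evaluates rules internally):

    def wireserver_allowed(dport, uid, ctstate):
        # Mirrors the three rules above, checked in order; the chain
        # policy is ACCEPT if nothing matches.
        if dport == 53:                    # ACCEPT tcp dpt:53
            return True
        if uid == 0:                       # ACCEPT owner UID match 0
            return True
        if ctstate in ("INVALID", "NEW"):  # DROP ctstate INVALID,NEW
            return False
        return True

    print(wireserver_allowed(80, uid=500, ctstate="NEW"))  # False: non-root blocked
    print(wireserver_allowed(80, uid=0, ctstate="NEW"))    # True: the agent itself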
Jul 6 23:55:36.488669 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:55:37.022701 systemd[1]: Started sshd@1-10.200.8.46:22-10.200.16.10:60194.service - OpenSSH per-connection server daemon (10.200.16.10:60194). Jul 6 23:55:37.667568 sshd[2156]: Accepted publickey for core from 10.200.16.10 port 60194 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:37.669098 sshd[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:37.675111 systemd-logind[1705]: New session 4 of user core. Jul 6 23:55:37.681686 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:55:38.123939 sshd[2156]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:38.128966 systemd[1]: sshd@1-10.200.8.46:22-10.200.16.10:60194.service: Deactivated successfully. Jul 6 23:55:38.131614 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:55:38.132428 systemd-logind[1705]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:55:38.133347 systemd-logind[1705]: Removed session 4. Jul 6 23:55:38.233611 systemd[1]: Started sshd@2-10.200.8.46:22-10.200.16.10:60202.service - OpenSSH per-connection server daemon (10.200.16.10:60202). Jul 6 23:55:38.866926 sshd[2163]: Accepted publickey for core from 10.200.16.10 port 60202 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:38.868448 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:38.873546 systemd-logind[1705]: New session 5 of user core. Jul 6 23:55:38.882863 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:55:39.319565 sshd[2163]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:39.322836 systemd[1]: sshd@2-10.200.8.46:22-10.200.16.10:60202.service: Deactivated successfully. Jul 6 23:55:39.325997 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:55:39.327957 systemd-logind[1705]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:55:39.329249 systemd-logind[1705]: Removed session 5. Jul 6 23:55:39.446442 systemd[1]: Started sshd@3-10.200.8.46:22-10.200.16.10:60206.service - OpenSSH per-connection server daemon (10.200.16.10:60206). Jul 6 23:55:40.072065 sshd[2170]: Accepted publickey for core from 10.200.16.10 port 60206 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:40.073605 sshd[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:40.078038 systemd-logind[1705]: New session 6 of user core. Jul 6 23:55:40.086657 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:55:40.519547 sshd[2170]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:40.523344 systemd[1]: sshd@3-10.200.8.46:22-10.200.16.10:60206.service: Deactivated successfully. Jul 6 23:55:40.525932 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:55:40.527735 systemd-logind[1705]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:55:40.528950 systemd-logind[1705]: Removed session 6. Jul 6 23:55:40.634861 systemd[1]: Started sshd@4-10.200.8.46:22-10.200.16.10:37832.service - OpenSSH per-connection server daemon (10.200.16.10:37832). 
Jul 6 23:55:41.265722 sshd[2177]: Accepted publickey for core from 10.200.16.10 port 37832 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:41.267772 sshd[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:41.272315 systemd-logind[1705]: New session 7 of user core. Jul 6 23:55:41.278634 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:55:41.810074 sudo[2180]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:55:41.810455 sudo[2180]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:41.852049 sudo[2180]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:41.955874 sshd[2177]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:41.959844 systemd[1]: sshd@4-10.200.8.46:22-10.200.16.10:37832.service: Deactivated successfully. Jul 6 23:55:41.962177 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:55:41.964111 systemd-logind[1705]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:55:41.965138 systemd-logind[1705]: Removed session 7. Jul 6 23:55:42.078332 systemd[1]: Started sshd@5-10.200.8.46:22-10.200.16.10:37846.service - OpenSSH per-connection server daemon (10.200.16.10:37846). Jul 6 23:55:42.705363 sshd[2185]: Accepted publickey for core from 10.200.16.10 port 37846 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:42.707276 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:42.708270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:55:42.715711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:42.719363 systemd-logind[1705]: New session 8 of user core. Jul 6 23:55:42.725233 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:55:42.753457 chronyd[1691]: Selected source PHC0 Jul 6 23:55:42.840782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:42.846093 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:43.052347 sudo[2202]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:55:43.053342 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:43.056833 sudo[2202]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:43.062847 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:55:43.063205 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:43.075816 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:43.079072 auditctl[2205]: No rules Jul 6 23:55:43.079457 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:55:43.079690 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:43.082714 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:43.537364 augenrules[2223]: No rules Jul 6 23:55:43.538991 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 6 23:55:43.540457 sudo[2201]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:43.574564 kubelet[2196]: E0706 23:55:43.574511 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:43.577663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:43.577846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:43.641730 sshd[2185]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:43.644974 systemd[1]: sshd@5-10.200.8.46:22-10.200.16.10:37846.service: Deactivated successfully. Jul 6 23:55:43.647023 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:55:43.648637 systemd-logind[1705]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:55:43.649904 systemd-logind[1705]: Removed session 8. Jul 6 23:55:43.751926 systemd[1]: Started sshd@6-10.200.8.46:22-10.200.16.10:37854.service - OpenSSH per-connection server daemon (10.200.16.10:37854). Jul 6 23:55:44.395022 sshd[2233]: Accepted publickey for core from 10.200.16.10 port 37854 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:44.396777 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:44.401865 systemd-logind[1705]: New session 9 of user core. Jul 6 23:55:44.412669 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:55:44.741808 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:55:44.742260 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:45.874829 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:55:45.876911 (dockerd)[2251]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:55:47.247202 dockerd[2251]: time="2025-07-06T23:55:47.247133681Z" level=info msg="Starting up" Jul 6 23:55:47.654816 dockerd[2251]: time="2025-07-06T23:55:47.654754281Z" level=info msg="Loading containers: start." Jul 6 23:55:47.819518 kernel: Initializing XFRM netlink socket Jul 6 23:55:47.941596 systemd-networkd[1350]: docker0: Link UP Jul 6 23:55:47.972036 dockerd[2251]: time="2025-07-06T23:55:47.971991481Z" level=info msg="Loading containers: done." Jul 6 23:55:48.035286 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2234676022-merged.mount: Deactivated successfully. 
Jul 6 23:55:48.044256 dockerd[2251]: time="2025-07-06T23:55:48.044166281Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:55:48.044407 dockerd[2251]: time="2025-07-06T23:55:48.044337781Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:55:48.044593 dockerd[2251]: time="2025-07-06T23:55:48.044565981Z" level=info msg="Daemon has completed initialization" Jul 6 23:55:48.103782 dockerd[2251]: time="2025-07-06T23:55:48.103714981Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:55:48.104561 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:55:49.228885 containerd[1734]: time="2025-07-06T23:55:49.228845481Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:55:49.964630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814658383.mount: Deactivated successfully. Jul 6 23:55:51.740964 containerd[1734]: time="2025-07-06T23:55:51.740870913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:51.744404 containerd[1734]: time="2025-07-06T23:55:51.744169502Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752" Jul 6 23:55:51.747426 containerd[1734]: time="2025-07-06T23:55:51.746997635Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:51.752800 containerd[1734]: time="2025-07-06T23:55:51.752750914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:51.753904 containerd[1734]: time="2025-07-06T23:55:51.753861645Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.524974362s" Jul 6 23:55:51.754089 containerd[1734]: time="2025-07-06T23:55:51.754065569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 6 23:55:51.754967 containerd[1734]: time="2025-07-06T23:55:51.754874764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:55:53.384863 containerd[1734]: time="2025-07-06T23:55:53.384782014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:53.387198 containerd[1734]: time="2025-07-06T23:55:53.387116844Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302" Jul 6 23:55:53.390793 containerd[1734]: time="2025-07-06T23:55:53.390730390Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:53.396246 containerd[1734]: time="2025-07-06T23:55:53.396200759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:53.397696 containerd[1734]: time="2025-07-06T23:55:53.397526476Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.642429387s" Jul 6 23:55:53.397696 containerd[1734]: time="2025-07-06T23:55:53.397572076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 6 23:55:53.398397 containerd[1734]: time="2025-07-06T23:55:53.398364087Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:55:53.808648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:55:53.814335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:53.962919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:53.977918 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:54.023820 kubelet[2453]: E0706 23:55:54.023763 2453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:54.026605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:54.026798 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
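This is the third identical kubelet failure (restart counters 1-3): /var/lib/kubelet/config.yaml does not exist yet because the node has not been initialized by kubeadm, so the unit crash-loops until that file appears. The restarts land roughly 10-11 s apart, consistent with a RestartSec=10 drop-in (an assumption; the unit file itself is not shown in this log). Checking the spacing from the logged timestamps:

    from datetime import datetime

    # Timestamps of the "Scheduled restart job" messages above (counters 1-3)
    stamps = ["23:55:31.869016", "23:55:42.708270", "23:55:53.808648"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for earlier, later in zip(times, times[1:]):
        print("%.1fs between restarts" % (later - earlier).total_seconds())
    # 10.8s between restarts
    # 11.1s between restarts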
Jul 6 23:55:55.443760 containerd[1734]: time="2025-07-06T23:55:55.443695260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:55.446292 containerd[1734]: time="2025-07-06T23:55:55.446223492Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679" Jul 6 23:55:55.451971 containerd[1734]: time="2025-07-06T23:55:55.451925565Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:55.457833 containerd[1734]: time="2025-07-06T23:55:55.457478835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:55.458567 containerd[1734]: time="2025-07-06T23:55:55.458528049Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.060123962s" Jul 6 23:55:55.458657 containerd[1734]: time="2025-07-06T23:55:55.458572049Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 6 23:55:55.459733 containerd[1734]: time="2025-07-06T23:55:55.459691763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:55:56.652255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017149093.mount: Deactivated successfully. 
Jul 6 23:55:57.263879 containerd[1734]: time="2025-07-06T23:55:57.263758473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:57.267201 containerd[1734]: time="2025-07-06T23:55:57.266989114Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951" Jul 6 23:55:57.270798 containerd[1734]: time="2025-07-06T23:55:57.270720362Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:57.275723 containerd[1734]: time="2025-07-06T23:55:57.275641024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:57.276795 containerd[1734]: time="2025-07-06T23:55:57.276318533Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.816589069s" Jul 6 23:55:57.276795 containerd[1734]: time="2025-07-06T23:55:57.276363934Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 6 23:55:57.277093 containerd[1734]: time="2025-07-06T23:55:57.277068242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:55:57.751847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2777401132.mount: Deactivated successfully. 
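The tmpmount unit names in these lines only look mangled: systemd escapes mount paths by turning '/' into '-' and hex-escaping a literal '-' as \x2d. A rough sketch of that rule (the authoritative tool is systemd-escape --path; this simplified version only covers the characters seen here):

    def unit_from_path(path):
        # Simplified systemd path escaping: '/' -> '-', and other
        # non-alphanumeric characters (notably '-') -> \xNN escapes.
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out) + ".mount"

    print(unit_from_path("/var/lib/containerd/tmpmounts/containerd-mount2777401132"))
    # var-lib-containerd-tmpmounts-containerd\x2dmount2777401132.mount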
Jul 6 23:55:59.173721 containerd[1734]: time="2025-07-06T23:55:59.173654122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:59.176151 containerd[1734]: time="2025-07-06T23:55:59.176033452Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 6 23:55:59.180492 containerd[1734]: time="2025-07-06T23:55:59.179552096Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:59.185836 containerd[1734]: time="2025-07-06T23:55:59.185720275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:59.189744 containerd[1734]: time="2025-07-06T23:55:59.187731700Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.910624757s" Jul 6 23:55:59.189744 containerd[1734]: time="2025-07-06T23:55:59.187792701Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:55:59.192646 containerd[1734]: time="2025-07-06T23:55:59.192612762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:55:59.663117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572884282.mount: Deactivated successfully. 
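Each pull above logs both the bytes read from the registry and the wall-clock duration, so a rough effective download rate can be derived (this is compressed registry traffic, so the figure is only indicative):

    # (bytes read, seconds) exactly as logged for the five pulls above
    pulls = {
        "kube-apiserver:v1.31.10":          (28077752, 2.524974362),
        "kube-controller-manager:v1.31.10": (24713302, 1.642429387),
        "kube-scheduler:v1.31.10":          (18783679, 2.060123962),
        "kube-proxy:v1.31.10":              (30383951, 1.816589069),
        "coredns/coredns:v1.11.3":          (18565249, 1.910624757),
    }
    for image, (nbytes, secs) in pulls.items():
        print("%s: %.1f MiB/s" % (image, nbytes / secs / 2**20))
    # Works out to roughly 9-16 MiB/s per image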
Jul 6 23:55:59.690282 containerd[1734]: time="2025-07-06T23:55:59.690144373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:59.692790 containerd[1734]: time="2025-07-06T23:55:59.692718506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 6 23:55:59.697399 containerd[1734]: time="2025-07-06T23:55:59.697339364Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:59.703422 containerd[1734]: time="2025-07-06T23:55:59.703382241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:59.704843 containerd[1734]: time="2025-07-06T23:55:59.704127950Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 511.240285ms" Jul 6 23:55:59.704843 containerd[1734]: time="2025-07-06T23:55:59.704167651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:55:59.704843 containerd[1734]: time="2025-07-06T23:55:59.704665457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:56:00.328246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031946881.mount: Deactivated successfully. Jul 6 23:56:02.751338 containerd[1734]: time="2025-07-06T23:56:02.751275202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:02.753905 containerd[1734]: time="2025-07-06T23:56:02.753844134Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jul 6 23:56:02.769193 containerd[1734]: time="2025-07-06T23:56:02.769137128Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:02.774618 containerd[1734]: time="2025-07-06T23:56:02.774549497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:02.776223 containerd[1734]: time="2025-07-06T23:56:02.775712512Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.071015555s" Jul 6 23:56:02.776223 containerd[1734]: time="2025-07-06T23:56:02.775758212Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 6 23:56:02.985618 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jul 6 23:56:04.061786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 6 23:56:04.071834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:04.716657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:04.718164 (kubelet)[2611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:56:04.722488 update_engine[1707]: I20250706 23:56:04.719570 1707 update_attempter.cc:509] Updating boot flags... Jul 6 23:56:04.899229 kubelet[2611]: E0706 23:56:04.899171 2611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:56:04.904526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:56:04.908622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:56:04.954490 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2630) Jul 6 23:56:05.395499 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2632) Jul 6 23:56:07.171634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:07.180014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:07.211335 systemd[1]: Reloading requested from client PID 2691 ('systemctl') (unit session-9.scope)... Jul 6 23:56:07.211351 systemd[1]: Reloading... Jul 6 23:56:07.319525 zram_generator::config[2731]: No configuration found. Jul 6 23:56:07.459015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:56:07.554259 systemd[1]: Reloading finished in 342 ms. Jul 6 23:56:07.604884 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:56:07.605073 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:56:07.605377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:07.608175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:08.555114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:08.568836 (kubelet)[2800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:56:08.757886 kubelet[2800]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:08.757886 kubelet[2800]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:56:08.757886 kubelet[2800]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:56:08.758375 kubelet[2800]: I0706 23:56:08.757954 2800 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:56:09.159118 kubelet[2800]: I0706 23:56:09.159072 2800 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:56:09.159118 kubelet[2800]: I0706 23:56:09.159103 2800 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:56:09.159486 kubelet[2800]: I0706 23:56:09.159448 2800 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:56:09.181562 kubelet[2800]: E0706 23:56:09.181513 2800 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:09.182683 kubelet[2800]: I0706 23:56:09.182512 2800 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:56:09.187953 kubelet[2800]: E0706 23:56:09.187908 2800 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:56:09.187953 kubelet[2800]: I0706 23:56:09.187952 2800 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:56:09.194869 kubelet[2800]: I0706 23:56:09.194585 2800 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:56:09.195403 kubelet[2800]: I0706 23:56:09.195377 2800 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:56:09.195589 kubelet[2800]: I0706 23:56:09.195558 2800 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:56:09.195797 kubelet[2800]: I0706 23:56:09.195588 2800 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-6a836f1a00","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:56:09.195944 kubelet[2800]: I0706 23:56:09.195815 2800 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:56:09.195944 kubelet[2800]: I0706 23:56:09.195829 2800 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:56:09.196022 kubelet[2800]: I0706 23:56:09.195965 2800 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:09.198748 kubelet[2800]: I0706 23:56:09.198720 2800 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:56:09.198748 kubelet[2800]: I0706 23:56:09.198752 2800 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:56:09.198876 kubelet[2800]: I0706 23:56:09.198794 2800 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:56:09.198876 kubelet[2800]: I0706 23:56:09.198817 2800 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:56:09.205122 kubelet[2800]: W0706 23:56:09.204542 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-6a836f1a00&limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:09.205122 kubelet[2800]: E0706 23:56:09.204614 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.8.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-6a836f1a00&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:09.205901 kubelet[2800]: W0706 23:56:09.205854 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:09.206031 kubelet[2800]: E0706 23:56:09.206011 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:09.206212 kubelet[2800]: I0706 23:56:09.206195 2800 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:56:09.206779 kubelet[2800]: I0706 23:56:09.206759 2800 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:56:09.206920 kubelet[2800]: W0706 23:56:09.206909 2800 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:56:09.209309 kubelet[2800]: I0706 23:56:09.209288 2800 server.go:1274] "Started kubelet" Jul 6 23:56:09.210504 kubelet[2800]: I0706 23:56:09.209427 2800 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:56:09.210504 kubelet[2800]: I0706 23:56:09.210456 2800 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:56:09.215982 kubelet[2800]: I0706 23:56:09.215106 2800 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:56:09.215982 kubelet[2800]: I0706 23:56:09.215451 2800 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:56:09.215982 kubelet[2800]: I0706 23:56:09.215710 2800 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:56:09.217923 kubelet[2800]: E0706 23:56:09.215904 2800 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.46:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-a-6a836f1a00.184fcecf28d8b857 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-a-6a836f1a00,UID:ci-4081.3.4-a-6a836f1a00,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-a-6a836f1a00,},FirstTimestamp:2025-07-06 23:56:09.209256023 +0000 UTC m=+0.637064983,LastTimestamp:2025-07-06 23:56:09.209256023 +0000 UTC m=+0.637064983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-a-6a836f1a00,}" Jul 6 23:56:09.219424 kubelet[2800]: I0706 23:56:09.219399 2800 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:56:09.223066 kubelet[2800]: E0706 23:56:09.222123 2800 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:09.223066 kubelet[2800]: I0706 23:56:09.222163 2800 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:56:09.223066 kubelet[2800]: I0706 23:56:09.222389 2800 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:56:09.223066 kubelet[2800]: I0706 23:56:09.222442 2800 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:56:09.223261 kubelet[2800]: W0706 23:56:09.223181 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:09.223261 kubelet[2800]: E0706 23:56:09.223240 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:09.224009 kubelet[2800]: I0706 23:56:09.223985 2800 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:56:09.224106 kubelet[2800]: I0706 23:56:09.224085 2800 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:56:09.225371 kubelet[2800]: E0706 23:56:09.225320 2800 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:56:09.225538 kubelet[2800]: I0706 23:56:09.225519 2800 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:56:09.232990 kubelet[2800]: E0706 23:56:09.232953 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-6a836f1a00?timeout=10s\": dial tcp 10.200.8.46:6443: connect: connection refused" interval="200ms" Jul 6 23:56:09.253486 kubelet[2800]: I0706 23:56:09.253307 2800 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:56:09.255775 kubelet[2800]: I0706 23:56:09.255561 2800 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:56:09.255775 kubelet[2800]: I0706 23:56:09.255593 2800 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:56:09.255775 kubelet[2800]: I0706 23:56:09.255616 2800 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:56:09.255951 kubelet[2800]: E0706 23:56:09.255771 2800 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:56:09.257218 kubelet[2800]: W0706 23:56:09.256646 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:09.257218 kubelet[2800]: E0706 23:56:09.256966 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:09.277522 kubelet[2800]: I0706 23:56:09.277495 2800 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:56:09.277522 kubelet[2800]: I0706 23:56:09.277516 2800 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:56:09.277695 kubelet[2800]: I0706 23:56:09.277537 2800 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:09.282434 kubelet[2800]: I0706 23:56:09.282407 2800 policy_none.go:49] "None policy: Start" Jul 6 23:56:09.283262 kubelet[2800]: I0706 23:56:09.283031 2800 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:56:09.283262 kubelet[2800]: I0706 23:56:09.283055 2800 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:56:09.304638 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:56:09.321454 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:56:09.322691 kubelet[2800]: E0706 23:56:09.322306 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:09.325145 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:56:09.332784 kubelet[2800]: I0706 23:56:09.332196 2800 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:56:09.332784 kubelet[2800]: I0706 23:56:09.332417 2800 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:56:09.332784 kubelet[2800]: I0706 23:56:09.332434 2800 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:56:09.332784 kubelet[2800]: I0706 23:56:09.332673 2800 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:56:09.334834 kubelet[2800]: E0706 23:56:09.334811 2800 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:09.372705 systemd[1]: Created slice kubepods-burstable-podfef9e045cc538a02d7efa742e31fcae0.slice - libcontainer container kubepods-burstable-podfef9e045cc538a02d7efa742e31fcae0.slice. 
Jul 6 23:56:09.384557 systemd[1]: Created slice kubepods-burstable-pod9576ac2ad659eeb8bef49f0b3ad52939.slice - libcontainer container kubepods-burstable-pod9576ac2ad659eeb8bef49f0b3ad52939.slice. Jul 6 23:56:09.389004 systemd[1]: Created slice kubepods-burstable-pod2e38afcf4b4df41e4cc7a444467eef41.slice - libcontainer container kubepods-burstable-pod2e38afcf4b4df41e4cc7a444467eef41.slice. Jul 6 23:56:09.433758 kubelet[2800]: E0706 23:56:09.433627 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-6a836f1a00?timeout=10s\": dial tcp 10.200.8.46:6443: connect: connection refused" interval="400ms" Jul 6 23:56:09.435804 kubelet[2800]: I0706 23:56:09.435759 2800 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.436295 kubelet[2800]: E0706 23:56:09.436265 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.46:6443/api/v1/nodes\": dial tcp 10.200.8.46:6443: connect: connection refused" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.523845 kubelet[2800]: I0706 23:56:09.523775 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e38afcf4b4df41e4cc7a444467eef41-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-6a836f1a00\" (UID: \"2e38afcf4b4df41e4cc7a444467eef41\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.523845 kubelet[2800]: I0706 23:56:09.523836 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.524085 kubelet[2800]: I0706 23:56:09.523872 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9576ac2ad659eeb8bef49f0b3ad52939-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-6a836f1a00\" (UID: \"9576ac2ad659eeb8bef49f0b3ad52939\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.524085 kubelet[2800]: I0706 23:56:09.523896 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e38afcf4b4df41e4cc7a444467eef41-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-6a836f1a00\" (UID: \"2e38afcf4b4df41e4cc7a444467eef41\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.524085 kubelet[2800]: I0706 23:56:09.523925 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.524085 kubelet[2800]: I0706 23:56:09.523952 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/2e38afcf4b4df41e4cc7a444467eef41-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-6a836f1a00\" (UID: \"2e38afcf4b4df41e4cc7a444467eef41\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.524085 kubelet[2800]: I0706 23:56:09.523978 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.524305 kubelet[2800]: I0706 23:56:09.524004 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.524305 kubelet[2800]: I0706 23:56:09.524032 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.640517 kubelet[2800]: I0706 23:56:09.640456 2800 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.640875 kubelet[2800]: E0706 23:56:09.640842 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.46:6443/api/v1/nodes\": dial tcp 10.200.8.46:6443: connect: connection refused" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:09.681926 containerd[1734]: time="2025-07-06T23:56:09.681878320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-6a836f1a00,Uid:fef9e045cc538a02d7efa742e31fcae0,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:09.689248 containerd[1734]: time="2025-07-06T23:56:09.689154712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-6a836f1a00,Uid:9576ac2ad659eeb8bef49f0b3ad52939,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:09.693749 containerd[1734]: time="2025-07-06T23:56:09.692908360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-6a836f1a00,Uid:2e38afcf4b4df41e4cc7a444467eef41,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:09.834609 kubelet[2800]: E0706 23:56:09.834555 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-6a836f1a00?timeout=10s\": dial tcp 10.200.8.46:6443: connect: connection refused" interval="800ms" Jul 6 23:56:10.045262 kubelet[2800]: I0706 23:56:10.045135 2800 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:10.045692 kubelet[2800]: E0706 23:56:10.045595 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.46:6443/api/v1/nodes\": dial tcp 10.200.8.46:6443: connect: connection refused" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:10.165029 kubelet[2800]: W0706 23:56:10.164950 2800 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-6a836f1a00&limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:10.165029 kubelet[2800]: E0706 23:56:10.165038 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-6a836f1a00&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:10.262695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount527624713.mount: Deactivated successfully. Jul 6 23:56:10.294097 containerd[1734]: time="2025-07-06T23:56:10.294030586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:10.297306 containerd[1734]: time="2025-07-06T23:56:10.297159326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 6 23:56:10.304537 containerd[1734]: time="2025-07-06T23:56:10.304485119Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:10.310255 containerd[1734]: time="2025-07-06T23:56:10.309942988Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:10.314919 containerd[1734]: time="2025-07-06T23:56:10.314836450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:56:10.319283 containerd[1734]: time="2025-07-06T23:56:10.319240706Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:10.322312 containerd[1734]: time="2025-07-06T23:56:10.322256944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:56:10.328736 containerd[1734]: time="2025-07-06T23:56:10.328685026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:10.329736 containerd[1734]: time="2025-07-06T23:56:10.329449835Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 640.215822ms" Jul 6 23:56:10.331502 containerd[1734]: time="2025-07-06T23:56:10.331449961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 638.476001ms" Jul 6 23:56:10.331971 containerd[1734]: time="2025-07-06T23:56:10.331944067Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 649.985546ms" Jul 6 23:56:10.492654 kubelet[2800]: W0706 23:56:10.492587 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:10.492816 kubelet[2800]: E0706 23:56:10.492663 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:10.635299 kubelet[2800]: E0706 23:56:10.635178 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-6a836f1a00?timeout=10s\": dial tcp 10.200.8.46:6443: connect: connection refused" interval="1.6s" Jul 6 23:56:10.774617 kubelet[2800]: W0706 23:56:10.774545 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:10.774617 kubelet[2800]: E0706 23:56:10.774623 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:10.778207 kubelet[2800]: W0706 23:56:10.778154 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:10.778315 kubelet[2800]: E0706 23:56:10.778215 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:10.834120 containerd[1734]: time="2025-07-06T23:56:10.833805634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:10.835354 containerd[1734]: time="2025-07-06T23:56:10.834666745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:10.835354 containerd[1734]: time="2025-07-06T23:56:10.834820347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.836093 containerd[1734]: time="2025-07-06T23:56:10.835644857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.838614 containerd[1734]: time="2025-07-06T23:56:10.838039788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:10.838614 containerd[1734]: time="2025-07-06T23:56:10.838111289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:10.838614 containerd[1734]: time="2025-07-06T23:56:10.838133389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.838614 containerd[1734]: time="2025-07-06T23:56:10.838218090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.839535 containerd[1734]: time="2025-07-06T23:56:10.839390205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:10.839923 containerd[1734]: time="2025-07-06T23:56:10.839802510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:10.841426 containerd[1734]: time="2025-07-06T23:56:10.840505419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.841740 containerd[1734]: time="2025-07-06T23:56:10.841628233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.850004 kubelet[2800]: I0706 23:56:10.849976 2800 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:10.853120 kubelet[2800]: E0706 23:56:10.852099 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.46:6443/api/v1/nodes\": dial tcp 10.200.8.46:6443: connect: connection refused" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:10.876679 systemd[1]: Started cri-containerd-0f4288edf510431648d9c9eb2238cc42d12e5639acbad31641af00babb9acb0c.scope - libcontainer container 0f4288edf510431648d9c9eb2238cc42d12e5639acbad31641af00babb9acb0c. Jul 6 23:56:10.884967 systemd[1]: Started cri-containerd-3cae2dccc7d12d015790a6e797a71a0a53bb3835530bf79f42700cb3ba26c917.scope - libcontainer container 3cae2dccc7d12d015790a6e797a71a0a53bb3835530bf79f42700cb3ba26c917. Jul 6 23:56:10.887425 systemd[1]: Started cri-containerd-c835c7b9859973ca0d32c753857c9b8af7eddbf85d549ed82792071c9f1b05ce.scope - libcontainer container c835c7b9859973ca0d32c753857c9b8af7eddbf85d549ed82792071c9f1b05ce. 
Jul 6 23:56:10.956862 containerd[1734]: time="2025-07-06T23:56:10.956798094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-6a836f1a00,Uid:2e38afcf4b4df41e4cc7a444467eef41,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f4288edf510431648d9c9eb2238cc42d12e5639acbad31641af00babb9acb0c\"" Jul 6 23:56:10.964358 containerd[1734]: time="2025-07-06T23:56:10.964134087Z" level=info msg="CreateContainer within sandbox \"0f4288edf510431648d9c9eb2238cc42d12e5639acbad31641af00babb9acb0c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:56:10.984941 containerd[1734]: time="2025-07-06T23:56:10.984894651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-6a836f1a00,Uid:9576ac2ad659eeb8bef49f0b3ad52939,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cae2dccc7d12d015790a6e797a71a0a53bb3835530bf79f42700cb3ba26c917\"" Jul 6 23:56:10.992638 containerd[1734]: time="2025-07-06T23:56:10.992579648Z" level=info msg="CreateContainer within sandbox \"3cae2dccc7d12d015790a6e797a71a0a53bb3835530bf79f42700cb3ba26c917\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:56:10.997208 containerd[1734]: time="2025-07-06T23:56:10.997158206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-6a836f1a00,Uid:fef9e045cc538a02d7efa742e31fcae0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c835c7b9859973ca0d32c753857c9b8af7eddbf85d549ed82792071c9f1b05ce\"" Jul 6 23:56:11.000196 containerd[1734]: time="2025-07-06T23:56:11.000154844Z" level=info msg="CreateContainer within sandbox \"c835c7b9859973ca0d32c753857c9b8af7eddbf85d549ed82792071c9f1b05ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:56:11.287218 kubelet[2800]: E0706 23:56:11.287100 2800 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:12.235875 kubelet[2800]: E0706 23:56:12.235784 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-6a836f1a00?timeout=10s\": dial tcp 10.200.8.46:6443: connect: connection refused" interval="3.2s" Jul 6 23:56:12.454826 kubelet[2800]: I0706 23:56:12.454787 2800 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:12.455239 kubelet[2800]: E0706 23:56:12.455195 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.46:6443/api/v1/nodes\": dial tcp 10.200.8.46:6443: connect: connection refused" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:12.625643 kubelet[2800]: W0706 23:56:12.625592 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-6a836f1a00&limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:12.625643 kubelet[2800]: E0706 23:56:12.625649 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.8.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-6a836f1a00&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:12.938288 kubelet[2800]: W0706 23:56:12.938149 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:12.938288 kubelet[2800]: E0706 23:56:12.938208 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:13.474177 kubelet[2800]: W0706 23:56:13.474122 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:13.474669 kubelet[2800]: E0706 23:56:13.474186 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:13.708850 kubelet[2800]: W0706 23:56:13.708804 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.46:6443: connect: connection refused Jul 6 23:56:13.709109 kubelet[2800]: E0706 23:56:13.708860 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:13.786209 containerd[1734]: time="2025-07-06T23:56:13.785909487Z" level=info msg="CreateContainer within sandbox \"0f4288edf510431648d9c9eb2238cc42d12e5639acbad31641af00babb9acb0c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a8b9689d3348a76a7a1cc83c15eb1149b043e617c288432a2ddfbf647ac1c0d2\"" Jul 6 23:56:13.787934 containerd[1734]: time="2025-07-06T23:56:13.787894112Z" level=info msg="CreateContainer within sandbox \"3cae2dccc7d12d015790a6e797a71a0a53bb3835530bf79f42700cb3ba26c917\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68cdc5f585aeaa382465319f688bc5b011ea5accce0fa9ef784b9b9ea35d165f\"" Jul 6 23:56:13.788175 containerd[1734]: time="2025-07-06T23:56:13.788148416Z" level=info msg="StartContainer for \"a8b9689d3348a76a7a1cc83c15eb1149b043e617c288432a2ddfbf647ac1c0d2\"" Jul 6 23:56:13.793733 containerd[1734]: time="2025-07-06T23:56:13.793686686Z" level=info msg="CreateContainer within sandbox \"c835c7b9859973ca0d32c753857c9b8af7eddbf85d549ed82792071c9f1b05ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a1232d975a0f08ee01c396c963b3807810c2f71260cc3e96a048c233d2fc1406\"" Jul 6 23:56:13.794342 
containerd[1734]: time="2025-07-06T23:56:13.794309794Z" level=info msg="StartContainer for \"68cdc5f585aeaa382465319f688bc5b011ea5accce0fa9ef784b9b9ea35d165f\"" Jul 6 23:56:13.797065 containerd[1734]: time="2025-07-06T23:56:13.797023628Z" level=info msg="StartContainer for \"a1232d975a0f08ee01c396c963b3807810c2f71260cc3e96a048c233d2fc1406\"" Jul 6 23:56:13.856992 systemd[1]: Started cri-containerd-a1232d975a0f08ee01c396c963b3807810c2f71260cc3e96a048c233d2fc1406.scope - libcontainer container a1232d975a0f08ee01c396c963b3807810c2f71260cc3e96a048c233d2fc1406. Jul 6 23:56:13.861043 systemd[1]: Started cri-containerd-a8b9689d3348a76a7a1cc83c15eb1149b043e617c288432a2ddfbf647ac1c0d2.scope - libcontainer container a8b9689d3348a76a7a1cc83c15eb1149b043e617c288432a2ddfbf647ac1c0d2. Jul 6 23:56:13.873715 systemd[1]: Started cri-containerd-68cdc5f585aeaa382465319f688bc5b011ea5accce0fa9ef784b9b9ea35d165f.scope - libcontainer container 68cdc5f585aeaa382465319f688bc5b011ea5accce0fa9ef784b9b9ea35d165f. Jul 6 23:56:13.943746 containerd[1734]: time="2025-07-06T23:56:13.942432473Z" level=info msg="StartContainer for \"a8b9689d3348a76a7a1cc83c15eb1149b043e617c288432a2ddfbf647ac1c0d2\" returns successfully" Jul 6 23:56:13.964335 containerd[1734]: time="2025-07-06T23:56:13.964289850Z" level=info msg="StartContainer for \"a1232d975a0f08ee01c396c963b3807810c2f71260cc3e96a048c233d2fc1406\" returns successfully" Jul 6 23:56:14.007619 containerd[1734]: time="2025-07-06T23:56:14.007562499Z" level=info msg="StartContainer for \"68cdc5f585aeaa382465319f688bc5b011ea5accce0fa9ef784b9b9ea35d165f\" returns successfully" Jul 6 23:56:15.658454 kubelet[2800]: I0706 23:56:15.657880 2800 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:16.331680 kubelet[2800]: E0706 23:56:16.331625 2800 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-a-6a836f1a00\" not found" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:16.388235 kubelet[2800]: I0706 23:56:16.388189 2800 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:16.388235 kubelet[2800]: E0706 23:56:16.388242 2800 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.4-a-6a836f1a00\": node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:16.561257 kubelet[2800]: E0706 23:56:16.561196 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:16.662367 kubelet[2800]: E0706 23:56:16.662223 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:16.762898 kubelet[2800]: E0706 23:56:16.762844 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:16.863659 kubelet[2800]: E0706 23:56:16.863612 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:16.964240 kubelet[2800]: E0706 23:56:16.964100 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.064938 kubelet[2800]: E0706 23:56:17.064891 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.165740 kubelet[2800]: E0706 
23:56:17.165697 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.266257 kubelet[2800]: E0706 23:56:17.266129 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.367268 kubelet[2800]: E0706 23:56:17.367221 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.467939 kubelet[2800]: E0706 23:56:17.467887 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.568118 kubelet[2800]: E0706 23:56:17.568068 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.669098 kubelet[2800]: E0706 23:56:17.668978 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.771890 kubelet[2800]: E0706 23:56:17.771832 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.873004 kubelet[2800]: E0706 23:56:17.872789 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:17.973786 kubelet[2800]: E0706 23:56:17.973736 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:18.073908 kubelet[2800]: E0706 23:56:18.073856 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:18.174621 kubelet[2800]: E0706 23:56:18.174496 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:18.275194 kubelet[2800]: E0706 23:56:18.275024 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:18.375661 kubelet[2800]: E0706 23:56:18.375618 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:18.476264 kubelet[2800]: E0706 23:56:18.476129 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:18.577204 kubelet[2800]: E0706 23:56:18.577151 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:18.678008 kubelet[2800]: E0706 23:56:18.677958 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:18.778423 kubelet[2800]: E0706 23:56:18.778299 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:19.167730 systemd[1]: Reloading requested from client PID 3078 ('systemctl') (unit session-9.scope)... Jul 6 23:56:19.167748 systemd[1]: Reloading... 
Jul 6 23:56:19.210773 kubelet[2800]: I0706 23:56:19.209962 2800 apiserver.go:52] "Watching apiserver" Jul 6 23:56:19.223433 kubelet[2800]: I0706 23:56:19.223399 2800 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:56:19.272537 zram_generator::config[3117]: No configuration found. Jul 6 23:56:19.425031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:56:19.523946 systemd[1]: Reloading finished in 355 ms. Jul 6 23:56:19.567937 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:19.582949 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:56:19.583222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:19.588790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:19.914109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:19.925838 (kubelet)[3188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:56:19.973515 kubelet[3188]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:19.973515 kubelet[3188]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:56:19.973515 kubelet[3188]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:19.973515 kubelet[3188]: I0706 23:56:19.972323 3188 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:56:19.979304 kubelet[3188]: I0706 23:56:19.979255 3188 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:56:19.979304 kubelet[3188]: I0706 23:56:19.979288 3188 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:56:19.979689 kubelet[3188]: I0706 23:56:19.979667 3188 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:56:19.981025 kubelet[3188]: I0706 23:56:19.980997 3188 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:56:19.983360 kubelet[3188]: I0706 23:56:19.982753 3188 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:56:19.985976 kubelet[3188]: E0706 23:56:19.985946 3188 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:56:19.985976 kubelet[3188]: I0706 23:56:19.985975 3188 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 6 23:56:19.990753 kubelet[3188]: I0706 23:56:19.990723 3188 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:56:19.992483 kubelet[3188]: I0706 23:56:19.990862 3188 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:56:19.992483 kubelet[3188]: I0706 23:56:19.990999 3188 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:56:19.992483 kubelet[3188]: I0706 23:56:19.991046 3188 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-6a836f1a00","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:56:19.992483 kubelet[3188]: I0706 23:56:19.991371 3188 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:56:19.992767 kubelet[3188]: I0706 23:56:19.991385 3188 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:56:19.992767 kubelet[3188]: I0706 23:56:19.991418 3188 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:19.992767 kubelet[3188]: I0706 23:56:19.991557 3188 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:56:19.992767 kubelet[3188]: I0706 23:56:19.991571 3188 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:56:19.992767 kubelet[3188]: I0706 23:56:19.991607 3188 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:56:19.992767 kubelet[3188]: I0706 23:56:19.991620 3188 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:56:19.997415 kubelet[3188]: I0706 23:56:19.997225 3188 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:56:19.997884 kubelet[3188]: I0706 23:56:19.997862 3188 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:56:19.998360 kubelet[3188]: I0706 23:56:19.998338 3188 server.go:1274] "Started kubelet" Jul 6 23:56:20.009508 
kubelet[3188]: I0706 23:56:20.007091 3188 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:56:20.009877 kubelet[3188]: I0706 23:56:20.009855 3188 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:56:20.010912 kubelet[3188]: I0706 23:56:20.010891 3188 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:56:20.014577 kubelet[3188]: I0706 23:56:20.012072 3188 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:56:20.014921 kubelet[3188]: I0706 23:56:20.014894 3188 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:56:20.020092 kubelet[3188]: I0706 23:56:20.020073 3188 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:56:20.023863 kubelet[3188]: I0706 23:56:20.023834 3188 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:56:20.024444 kubelet[3188]: E0706 23:56:20.024421 3188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-6a836f1a00\" not found" Jul 6 23:56:20.027541 kubelet[3188]: I0706 23:56:20.027267 3188 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:56:20.027541 kubelet[3188]: I0706 23:56:20.027456 3188 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:56:20.029452 kubelet[3188]: I0706 23:56:20.029420 3188 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:56:20.030419 kubelet[3188]: I0706 23:56:20.030380 3188 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:56:20.033523 kubelet[3188]: E0706 23:56:20.033502 3188 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:56:20.034077 kubelet[3188]: I0706 23:56:20.033821 3188 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:56:20.036398 kubelet[3188]: I0706 23:56:20.036378 3188 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:56:20.036688 kubelet[3188]: I0706 23:56:20.036673 3188 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:56:20.036838 kubelet[3188]: I0706 23:56:20.036777 3188 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:56:20.037079 kubelet[3188]: E0706 23:56:20.036830 3188 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:56:20.038347 kubelet[3188]: I0706 23:56:20.038231 3188 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:56:20.096379 kubelet[3188]: I0706 23:56:20.096353 3188 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:56:20.096541 kubelet[3188]: I0706 23:56:20.096508 3188 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:56:20.096541 kubelet[3188]: I0706 23:56:20.096533 3188 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:20.096727 kubelet[3188]: I0706 23:56:20.096704 3188 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:56:20.096784 kubelet[3188]: I0706 23:56:20.096721 3188 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:56:20.096784 kubelet[3188]: I0706 23:56:20.096747 3188 policy_none.go:49] "None policy: Start" Jul 6 23:56:20.097424 kubelet[3188]: I0706 23:56:20.097407 3188 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:56:20.098491 kubelet[3188]: I0706 23:56:20.097797 3188 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:56:20.098491 kubelet[3188]: I0706 23:56:20.097974 3188 state_mem.go:75] "Updated machine memory state" Jul 6 23:56:20.103160 kubelet[3188]: I0706 23:56:20.102270 3188 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:56:20.103160 kubelet[3188]: I0706 23:56:20.102449 3188 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:56:20.103160 kubelet[3188]: I0706 23:56:20.102476 3188 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:56:20.103385 kubelet[3188]: I0706 23:56:20.103210 3188 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:56:20.158104 kubelet[3188]: W0706 23:56:20.158038 3188 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:20.161489 kubelet[3188]: W0706 23:56:20.161443 3188 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:20.161819 kubelet[3188]: W0706 23:56:20.161637 3188 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:20.206066 kubelet[3188]: I0706 23:56:20.205942 3188 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.217674 kubelet[3188]: I0706 23:56:20.217631 3188 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.217828 kubelet[3188]: I0706 23:56:20.217725 3188 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.228990 kubelet[3188]: I0706 23:56:20.228937 3188 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e38afcf4b4df41e4cc7a444467eef41-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-6a836f1a00\" (UID: \"2e38afcf4b4df41e4cc7a444467eef41\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.228990 kubelet[3188]: I0706 23:56:20.228986 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e38afcf4b4df41e4cc7a444467eef41-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-6a836f1a00\" (UID: \"2e38afcf4b4df41e4cc7a444467eef41\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.229372 kubelet[3188]: I0706 23:56:20.229012 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.229372 kubelet[3188]: I0706 23:56:20.229035 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.229372 kubelet[3188]: I0706 23:56:20.229056 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9576ac2ad659eeb8bef49f0b3ad52939-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-6a836f1a00\" (UID: \"9576ac2ad659eeb8bef49f0b3ad52939\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.229372 kubelet[3188]: I0706 23:56:20.229076 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e38afcf4b4df41e4cc7a444467eef41-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-6a836f1a00\" (UID: \"2e38afcf4b4df41e4cc7a444467eef41\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.229372 kubelet[3188]: I0706 23:56:20.229100 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.229759 kubelet[3188]: I0706 23:56:20.229128 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.229759 kubelet[3188]: I0706 23:56:20.229157 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fef9e045cc538a02d7efa742e31fcae0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-6a836f1a00\" (UID: \"fef9e045cc538a02d7efa742e31fcae0\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:20.997680 kubelet[3188]: I0706 23:56:20.997606 3188 apiserver.go:52] "Watching apiserver" Jul 6 23:56:21.028456 kubelet[3188]: I0706 23:56:21.028270 3188 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:56:21.123235 kubelet[3188]: W0706 23:56:21.122698 3188 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:21.123235 kubelet[3188]: E0706 23:56:21.122781 3188 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.4-a-6a836f1a00\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-a-6a836f1a00" Jul 6 23:56:21.123990 kubelet[3188]: I0706 23:56:21.123939 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-a-6a836f1a00" podStartSLOduration=1.123900981 podStartE2EDuration="1.123900981s" podCreationTimestamp="2025-07-06 23:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:21.12383498 +0000 UTC m=+1.193430441" watchObservedRunningTime="2025-07-06 23:56:21.123900981 +0000 UTC m=+1.193496442" Jul 6 23:56:21.148957 kubelet[3188]: I0706 23:56:21.148743 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-a-6a836f1a00" podStartSLOduration=1.148708396 podStartE2EDuration="1.148708396s" podCreationTimestamp="2025-07-06 23:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:21.148094188 +0000 UTC m=+1.217689649" watchObservedRunningTime="2025-07-06 23:56:21.148708396 +0000 UTC m=+1.218303957" Jul 6 23:56:21.164609 kubelet[3188]: I0706 23:56:21.162458 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-6a836f1a00" podStartSLOduration=1.16242447 podStartE2EDuration="1.16242447s" podCreationTimestamp="2025-07-06 23:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:21.161278555 +0000 UTC m=+1.230874116" watchObservedRunningTime="2025-07-06 23:56:21.16242447 +0000 UTC m=+1.232020031" Jul 6 23:56:25.018240 kubelet[3188]: I0706 23:56:25.018192 3188 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:56:25.019510 containerd[1734]: time="2025-07-06T23:56:25.019010898Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:56:25.020832 kubelet[3188]: I0706 23:56:25.019662 3188 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:56:25.817997 systemd[1]: Created slice kubepods-besteffort-podd2aaa1fb_cb0b_46b5_b215_76618916460c.slice - libcontainer container kubepods-besteffort-podd2aaa1fb_cb0b_46b5_b215_76618916460c.slice. 
Jul 6 23:56:25.862975 kubelet[3188]: I0706 23:56:25.862926 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2aaa1fb-cb0b-46b5-b215-76618916460c-lib-modules\") pod \"kube-proxy-gm59h\" (UID: \"d2aaa1fb-cb0b-46b5-b215-76618916460c\") " pod="kube-system/kube-proxy-gm59h" Jul 6 23:56:25.862975 kubelet[3188]: I0706 23:56:25.862981 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2aaa1fb-cb0b-46b5-b215-76618916460c-kube-proxy\") pod \"kube-proxy-gm59h\" (UID: \"d2aaa1fb-cb0b-46b5-b215-76618916460c\") " pod="kube-system/kube-proxy-gm59h" Jul 6 23:56:25.863177 kubelet[3188]: I0706 23:56:25.863007 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2aaa1fb-cb0b-46b5-b215-76618916460c-xtables-lock\") pod \"kube-proxy-gm59h\" (UID: \"d2aaa1fb-cb0b-46b5-b215-76618916460c\") " pod="kube-system/kube-proxy-gm59h" Jul 6 23:56:25.863177 kubelet[3188]: I0706 23:56:25.863027 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmp5h\" (UniqueName: \"kubernetes.io/projected/d2aaa1fb-cb0b-46b5-b215-76618916460c-kube-api-access-hmp5h\") pod \"kube-proxy-gm59h\" (UID: \"d2aaa1fb-cb0b-46b5-b215-76618916460c\") " pod="kube-system/kube-proxy-gm59h" Jul 6 23:56:26.130728 containerd[1734]: time="2025-07-06T23:56:26.130556300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gm59h,Uid:d2aaa1fb-cb0b-46b5-b215-76618916460c,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:26.199398 containerd[1734]: time="2025-07-06T23:56:26.197938855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:26.199398 containerd[1734]: time="2025-07-06T23:56:26.198017156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:26.199398 containerd[1734]: time="2025-07-06T23:56:26.198033556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:26.199658 containerd[1734]: time="2025-07-06T23:56:26.198136657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:26.253726 systemd[1]: Started cri-containerd-b457aab213e07bb8ef81dd94ab578200bcac5ebaac41d3cbf0bd439c28eddb2e.scope - libcontainer container b457aab213e07bb8ef81dd94ab578200bcac5ebaac41d3cbf0bd439c28eddb2e. Jul 6 23:56:26.262111 systemd[1]: Created slice kubepods-besteffort-pode81e8c68_bdb3_405b_b5d5_ef1f356cc0e9.slice - libcontainer container kubepods-besteffort-pode81e8c68_bdb3_405b_b5d5_ef1f356cc0e9.slice. 
Jul 6 23:56:26.291453 containerd[1734]: time="2025-07-06T23:56:26.291405541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gm59h,Uid:d2aaa1fb-cb0b-46b5-b215-76618916460c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b457aab213e07bb8ef81dd94ab578200bcac5ebaac41d3cbf0bd439c28eddb2e\"" Jul 6 23:56:26.296103 containerd[1734]: time="2025-07-06T23:56:26.296028999Z" level=info msg="CreateContainer within sandbox \"b457aab213e07bb8ef81dd94ab578200bcac5ebaac41d3cbf0bd439c28eddb2e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:56:26.334793 containerd[1734]: time="2025-07-06T23:56:26.334732690Z" level=info msg="CreateContainer within sandbox \"b457aab213e07bb8ef81dd94ab578200bcac5ebaac41d3cbf0bd439c28eddb2e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00d36a7c301f83259fba7289e54b9f201f17404add0faa454c7bb40026792bd3\"" Jul 6 23:56:26.337350 containerd[1734]: time="2025-07-06T23:56:26.335912505Z" level=info msg="StartContainer for \"00d36a7c301f83259fba7289e54b9f201f17404add0faa454c7bb40026792bd3\"" Jul 6 23:56:26.367513 kubelet[3188]: I0706 23:56:26.367168 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e81e8c68-bdb3-405b-b5d5-ef1f356cc0e9-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-prb8q\" (UID: \"e81e8c68-bdb3-405b-b5d5-ef1f356cc0e9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-prb8q" Jul 6 23:56:26.367513 kubelet[3188]: I0706 23:56:26.367225 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6tq\" (UniqueName: \"kubernetes.io/projected/e81e8c68-bdb3-405b-b5d5-ef1f356cc0e9-kube-api-access-7h6tq\") pod \"tigera-operator-5bf8dfcb4-prb8q\" (UID: \"e81e8c68-bdb3-405b-b5d5-ef1f356cc0e9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-prb8q" Jul 6 23:56:26.375672 systemd[1]: Started cri-containerd-00d36a7c301f83259fba7289e54b9f201f17404add0faa454c7bb40026792bd3.scope - libcontainer container 00d36a7c301f83259fba7289e54b9f201f17404add0faa454c7bb40026792bd3. Jul 6 23:56:26.418251 containerd[1734]: time="2025-07-06T23:56:26.417451540Z" level=info msg="StartContainer for \"00d36a7c301f83259fba7289e54b9f201f17404add0faa454c7bb40026792bd3\" returns successfully" Jul 6 23:56:26.571724 containerd[1734]: time="2025-07-06T23:56:26.571020388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-prb8q,Uid:e81e8c68-bdb3-405b-b5d5-ef1f356cc0e9,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:56:26.629864 containerd[1734]: time="2025-07-06T23:56:26.629766734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:26.629864 containerd[1734]: time="2025-07-06T23:56:26.629821734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:26.629864 containerd[1734]: time="2025-07-06T23:56:26.629840534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:26.630735 containerd[1734]: time="2025-07-06T23:56:26.630514843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:26.657671 systemd[1]: Started cri-containerd-4b2529cee828204e45d05f681e86e4e4ad329fc58b12d9dbeffffc9f685d1c0e.scope - libcontainer container 4b2529cee828204e45d05f681e86e4e4ad329fc58b12d9dbeffffc9f685d1c0e. Jul 6 23:56:26.706390 containerd[1734]: time="2025-07-06T23:56:26.706259504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-prb8q,Uid:e81e8c68-bdb3-405b-b5d5-ef1f356cc0e9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4b2529cee828204e45d05f681e86e4e4ad329fc58b12d9dbeffffc9f685d1c0e\"" Jul 6 23:56:26.709736 containerd[1734]: time="2025-07-06T23:56:26.709579346Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:56:28.285971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844371504.mount: Deactivated successfully. Jul 6 23:56:29.054778 containerd[1734]: time="2025-07-06T23:56:29.054721399Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:29.063010 containerd[1734]: time="2025-07-06T23:56:29.062927103Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:56:29.066503 containerd[1734]: time="2025-07-06T23:56:29.064755826Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:29.070766 containerd[1734]: time="2025-07-06T23:56:29.070725502Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:29.071508 containerd[1734]: time="2025-07-06T23:56:29.071452111Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.361811964s" Jul 6 23:56:29.071657 containerd[1734]: time="2025-07-06T23:56:29.071515112Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:56:29.075123 containerd[1734]: time="2025-07-06T23:56:29.075085757Z" level=info msg="CreateContainer within sandbox \"4b2529cee828204e45d05f681e86e4e4ad329fc58b12d9dbeffffc9f685d1c0e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:56:29.108684 containerd[1734]: time="2025-07-06T23:56:29.108637983Z" level=info msg="CreateContainer within sandbox \"4b2529cee828204e45d05f681e86e4e4ad329fc58b12d9dbeffffc9f685d1c0e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"94d4ae03d6d53ae5762dd9bbbe875105776ed704421419345a5953145b3cf2fc\"" Jul 6 23:56:29.109615 containerd[1734]: time="2025-07-06T23:56:29.109572795Z" level=info msg="StartContainer for \"94d4ae03d6d53ae5762dd9bbbe875105776ed704421419345a5953145b3cf2fc\"" Jul 6 23:56:29.139630 systemd[1]: Started cri-containerd-94d4ae03d6d53ae5762dd9bbbe875105776ed704421419345a5953145b3cf2fc.scope - libcontainer container 94d4ae03d6d53ae5762dd9bbbe875105776ed704421419345a5953145b3cf2fc. 
Jul 6 23:56:29.170884 containerd[1734]: time="2025-07-06T23:56:29.170835272Z" level=info msg="StartContainer for \"94d4ae03d6d53ae5762dd9bbbe875105776ed704421419345a5953145b3cf2fc\" returns successfully" Jul 6 23:56:30.110026 kubelet[3188]: I0706 23:56:30.109963 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gm59h" podStartSLOduration=5.109944587 podStartE2EDuration="5.109944587s" podCreationTimestamp="2025-07-06 23:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:27.121765076 +0000 UTC m=+7.191360637" watchObservedRunningTime="2025-07-06 23:56:30.109944587 +0000 UTC m=+10.179540048" Jul 6 23:56:33.792253 kubelet[3188]: I0706 23:56:33.790904 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-prb8q" podStartSLOduration=5.426155451 podStartE2EDuration="7.790879952s" podCreationTimestamp="2025-07-06 23:56:26 +0000 UTC" firstStartedPulling="2025-07-06 23:56:26.708000926 +0000 UTC m=+6.777596387" lastFinishedPulling="2025-07-06 23:56:29.072725427 +0000 UTC m=+9.142320888" observedRunningTime="2025-07-06 23:56:30.11023149 +0000 UTC m=+10.179826951" watchObservedRunningTime="2025-07-06 23:56:33.790879952 +0000 UTC m=+13.860475413" Jul 6 23:56:35.864625 sudo[2236]: pam_unix(sudo:session): session closed for user root Jul 6 23:56:35.968723 sshd[2233]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:35.972635 systemd[1]: sshd@6-10.200.8.46:22-10.200.16.10:37854.service: Deactivated successfully. Jul 6 23:56:35.981877 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:56:35.982165 systemd[1]: session-9.scope: Consumed 4.951s CPU time, 155.3M memory peak, 0B memory swap peak. Jul 6 23:56:35.988964 systemd-logind[1705]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:56:35.992379 systemd-logind[1705]: Removed session 9. Jul 6 23:56:40.276365 kubelet[3188]: I0706 23:56:40.275399 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bef35b8-2b6f-4e26-a132-51dc5c5471b9-tigera-ca-bundle\") pod \"calico-typha-75d5c8b566-kdxt8\" (UID: \"5bef35b8-2b6f-4e26-a132-51dc5c5471b9\") " pod="calico-system/calico-typha-75d5c8b566-kdxt8" Jul 6 23:56:40.276365 kubelet[3188]: I0706 23:56:40.275446 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5bef35b8-2b6f-4e26-a132-51dc5c5471b9-typha-certs\") pod \"calico-typha-75d5c8b566-kdxt8\" (UID: \"5bef35b8-2b6f-4e26-a132-51dc5c5471b9\") " pod="calico-system/calico-typha-75d5c8b566-kdxt8" Jul 6 23:56:40.276365 kubelet[3188]: I0706 23:56:40.275493 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6xs2\" (UniqueName: \"kubernetes.io/projected/5bef35b8-2b6f-4e26-a132-51dc5c5471b9-kube-api-access-r6xs2\") pod \"calico-typha-75d5c8b566-kdxt8\" (UID: \"5bef35b8-2b6f-4e26-a132-51dc5c5471b9\") " pod="calico-system/calico-typha-75d5c8b566-kdxt8" Jul 6 23:56:40.279030 systemd[1]: Created slice kubepods-besteffort-pod5bef35b8_2b6f_4e26_a132_51dc5c5471b9.slice - libcontainer container kubepods-besteffort-pod5bef35b8_2b6f_4e26_a132_51dc5c5471b9.slice. 
Jul 6 23:56:40.578996 kubelet[3188]: I0706 23:56:40.578953 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-flexvol-driver-host\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.578996 kubelet[3188]: I0706 23:56:40.578994 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-lib-modules\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579261 kubelet[3188]: I0706 23:56:40.579019 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-cni-bin-dir\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579261 kubelet[3188]: I0706 23:56:40.579039 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-xtables-lock\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579261 kubelet[3188]: I0706 23:56:40.579063 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-cni-log-dir\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579261 kubelet[3188]: I0706 23:56:40.579085 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-tigera-ca-bundle\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579261 kubelet[3188]: I0706 23:56:40.579123 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-node-certs\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579506 kubelet[3188]: I0706 23:56:40.579144 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-var-lib-calico\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579506 kubelet[3188]: I0706 23:56:40.579166 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-cni-net-dir\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579506 kubelet[3188]: I0706 23:56:40.579184 3188 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-policysync\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579506 kubelet[3188]: I0706 23:56:40.579221 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-var-run-calico\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.579506 kubelet[3188]: I0706 23:56:40.579245 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglh4\" (UniqueName: \"kubernetes.io/projected/a3a4e221-e83c-4dbf-a3bb-313f4eea5d11-kube-api-access-mglh4\") pod \"calico-node-qr5bm\" (UID: \"a3a4e221-e83c-4dbf-a3bb-313f4eea5d11\") " pod="calico-system/calico-node-qr5bm" Jul 6 23:56:40.581129 systemd[1]: Created slice kubepods-besteffort-poda3a4e221_e83c_4dbf_a3bb_313f4eea5d11.slice - libcontainer container kubepods-besteffort-poda3a4e221_e83c_4dbf_a3bb_313f4eea5d11.slice. Jul 6 23:56:40.585080 containerd[1734]: time="2025-07-06T23:56:40.585035570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75d5c8b566-kdxt8,Uid:5bef35b8-2b6f-4e26-a132-51dc5c5471b9,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:40.677358 containerd[1734]: time="2025-07-06T23:56:40.677160301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:40.677830 containerd[1734]: time="2025-07-06T23:56:40.677335103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:40.677830 containerd[1734]: time="2025-07-06T23:56:40.677356303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.677830 containerd[1734]: time="2025-07-06T23:56:40.677622607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.684359 kubelet[3188]: E0706 23:56:40.683448 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.684359 kubelet[3188]: W0706 23:56:40.683603 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.684359 kubelet[3188]: E0706 23:56:40.683636 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:40.685307 kubelet[3188]: E0706 23:56:40.685283 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.685307 kubelet[3188]: W0706 23:56:40.685305 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.686682 kubelet[3188]: E0706 23:56:40.686651 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.687677 kubelet[3188]: E0706 23:56:40.687652 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.687761 kubelet[3188]: W0706 23:56:40.687678 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.688819 kubelet[3188]: E0706 23:56:40.688791 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.690666 kubelet[3188]: E0706 23:56:40.689244 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.690666 kubelet[3188]: W0706 23:56:40.690524 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.690666 kubelet[3188]: E0706 23:56:40.690589 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.690905 kubelet[3188]: E0706 23:56:40.690886 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.690974 kubelet[3188]: W0706 23:56:40.690906 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.690974 kubelet[3188]: E0706 23:56:40.690962 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.692541 kubelet[3188]: E0706 23:56:40.692516 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.692541 kubelet[3188]: W0706 23:56:40.692540 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.695243 kubelet[3188]: E0706 23:56:40.695222 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:40.695521 kubelet[3188]: E0706 23:56:40.695380 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.695521 kubelet[3188]: W0706 23:56:40.695399 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.695642 kubelet[3188]: E0706 23:56:40.695530 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.695846 kubelet[3188]: E0706 23:56:40.695830 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.696002 kubelet[3188]: W0706 23:56:40.695927 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.696129 kubelet[3188]: E0706 23:56:40.696089 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.697503 kubelet[3188]: E0706 23:56:40.696607 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.697503 kubelet[3188]: W0706 23:56:40.696624 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.697882 kubelet[3188]: E0706 23:56:40.697792 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.699349 kubelet[3188]: E0706 23:56:40.699265 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.699349 kubelet[3188]: W0706 23:56:40.699282 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.699777 kubelet[3188]: E0706 23:56:40.699693 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.700033 kubelet[3188]: E0706 23:56:40.699995 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.700033 kubelet[3188]: W0706 23:56:40.700010 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.702744 kubelet[3188]: E0706 23:56:40.702727 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:40.710208 kubelet[3188]: E0706 23:56:40.704684 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.710208 kubelet[3188]: W0706 23:56:40.709558 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.710208 kubelet[3188]: E0706 23:56:40.709602 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.710208 kubelet[3188]: E0706 23:56:40.710069 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.710208 kubelet[3188]: W0706 23:56:40.710083 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.710208 kubelet[3188]: E0706 23:56:40.710132 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.717080 kubelet[3188]: E0706 23:56:40.712322 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.717080 kubelet[3188]: W0706 23:56:40.716952 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.717436 kubelet[3188]: E0706 23:56:40.717408 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.717820 kubelet[3188]: E0706 23:56:40.717800 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.719773 kubelet[3188]: W0706 23:56:40.719751 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.722708 kubelet[3188]: E0706 23:56:40.722572 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.723509 kubelet[3188]: E0706 23:56:40.723489 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.723742 kubelet[3188]: W0706 23:56:40.723606 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.723742 kubelet[3188]: E0706 23:56:40.723648 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:40.723921 kubelet[3188]: E0706 23:56:40.723903 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.723973 kubelet[3188]: W0706 23:56:40.723922 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.724557 kubelet[3188]: E0706 23:56:40.724516 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.724847 kubelet[3188]: E0706 23:56:40.724822 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.724847 kubelet[3188]: W0706 23:56:40.724844 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.724959 kubelet[3188]: E0706 23:56:40.724904 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.726907 kubelet[3188]: E0706 23:56:40.725939 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.726907 kubelet[3188]: W0706 23:56:40.725972 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.726907 kubelet[3188]: E0706 23:56:40.726188 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.729028 kubelet[3188]: E0706 23:56:40.728407 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.729028 kubelet[3188]: W0706 23:56:40.728427 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.729028 kubelet[3188]: E0706 23:56:40.728897 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.733092 kubelet[3188]: E0706 23:56:40.733067 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.733092 kubelet[3188]: W0706 23:56:40.733090 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.733500 kubelet[3188]: E0706 23:56:40.733186 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:40.737004 kubelet[3188]: E0706 23:56:40.736850 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.737004 kubelet[3188]: W0706 23:56:40.736870 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.737004 kubelet[3188]: E0706 23:56:40.736887 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.737904 kubelet[3188]: E0706 23:56:40.737886 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.738668 kubelet[3188]: W0706 23:56:40.738648 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.738766 kubelet[3188]: E0706 23:56:40.738753 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:40.748492 kubelet[3188]: E0706 23:56:40.748057 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:40.748422 systemd[1]: Started cri-containerd-53f05023471d8cf62979e2ad1efd41a74ba15a470d1d711201b5ab02e486caa3.scope - libcontainer container 53f05023471d8cf62979e2ad1efd41a74ba15a470d1d711201b5ab02e486caa3. Jul 6 23:56:40.750053 kubelet[3188]: W0706 23:56:40.749893 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:40.750053 kubelet[3188]: E0706 23:56:40.750004 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
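The recurring driver-call.go:262 failure above is kubelet's FlexVolume prober invoking the nodeagent~uds driver's init call: the binary is missing, so the captured output is empty, and decoding an empty byte slice is exactly what encoding/json reports as "unexpected end of JSON input". A minimal reproduction; the status struct below is illustrative, assuming nothing about kubelet's real type beyond it being decoded from JSON:

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus stands in for the status object a FlexVolume driver is
// expected to print on stdout; the fields are illustrative, not kubelet's.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	var st driverStatus
	// The driver was never executed, so its captured output is empty.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input
}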
Jul 6 23:56:40.818494 containerd[1734]: time="2025-07-06T23:56:40.817261020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75d5c8b566-kdxt8,Uid:5bef35b8-2b6f-4e26-a132-51dc5c5471b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"53f05023471d8cf62979e2ad1efd41a74ba15a470d1d711201b5ab02e486caa3\""
Jul 6 23:56:40.822742 containerd[1734]: time="2025-07-06T23:56:40.822695287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 6 23:56:40.836251 kubelet[3188]: E0706 23:56:40.834352 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vfbs" podUID="17855cdd-9fca-46b9-9af2-cad254d32cd1"
[... the FlexVolume probe-failure triplet repeats ~18 more times between 23:56:40.878 and 23:56:40.887; elided ...]
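The companion driver-call.go:149 warning carries os/exec's stock error text: "executable file not found in $PATH" is what a $PATH lookup returns when a command name cannot be resolved, i.e. the uds helper simply is not installed under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/. A sketch of that failure mode, using a bare command name so LookPath actually searches $PATH, and assuming no uds binary exists on the machine running it:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "uds" is the FlexVolume helper name from the log; a name with no
	// path separator makes LookPath search $PATH and fail if it is absent.
	_, err := exec.LookPath("uds")
	fmt.Println(err)                              // exec: "uds": executable file not found in $PATH
	fmt.Println(errors.Is(err, exec.ErrNotFound)) // true
}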
Jul 6 23:56:40.889443 kubelet[3188]: I0706 23:56:40.889335 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17855cdd-9fca-46b9-9af2-cad254d32cd1-kubelet-dir\") pod \"csi-node-driver-9vfbs\" (UID: \"17855cdd-9fca-46b9-9af2-cad254d32cd1\") " pod="calico-system/csi-node-driver-9vfbs"
Jul 6 23:56:40.890115 containerd[1734]: time="2025-07-06T23:56:40.890074314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qr5bm,Uid:a3a4e221-e83c-4dbf-a3bb-313f4eea5d11,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:40.891315 kubelet[3188]: I0706 23:56:40.891145 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17855cdd-9fca-46b9-9af2-cad254d32cd1-registration-dir\") pod \"csi-node-driver-9vfbs\" (UID: \"17855cdd-9fca-46b9-9af2-cad254d32cd1\") " pod="calico-system/csi-node-driver-9vfbs"
Jul 6 23:56:40.892348 kubelet[3188]: I0706 23:56:40.891539 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17855cdd-9fca-46b9-9af2-cad254d32cd1-socket-dir\") pod \"csi-node-driver-9vfbs\" (UID: \"17855cdd-9fca-46b9-9af2-cad254d32cd1\") " pod="calico-system/csi-node-driver-9vfbs"
Jul 6 23:56:40.895062 kubelet[3188]: I0706 23:56:40.894997 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/17855cdd-9fca-46b9-9af2-cad254d32cd1-varrun\") pod \"csi-node-driver-9vfbs\" (UID: \"17855cdd-9fca-46b9-9af2-cad254d32cd1\") " pod="calico-system/csi-node-driver-9vfbs"
Jul 6 23:56:40.897447 kubelet[3188]: I0706 23:56:40.897294 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5gp\" (UniqueName: \"kubernetes.io/projected/17855cdd-9fca-46b9-9af2-cad254d32cd1-kube-api-access-4p5gp\") pod \"csi-node-driver-9vfbs\" (UID: \"17855cdd-9fca-46b9-9af2-cad254d32cd1\") " pod="calico-system/csi-node-driver-9vfbs"
[... interleaved with the reconciler entries above, the FlexVolume probe-failure triplet repeats ~17 more times between 23:56:40.887 and 23:56:40.899; elided ...]
Jul 6 23:56:40.953181 containerd[1734]: time="2025-07-06T23:56:40.952907585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:56:40.953181 containerd[1734]: time="2025-07-06T23:56:40.952980886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:56:40.953181 containerd[1734]: time="2025-07-06T23:56:40.952996286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:40.953181 containerd[1734]: time="2025-07-06T23:56:40.953079787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:40.986695 systemd[1]: Started cri-containerd-c363621f03db73fb6b97cf583ea7fab31e64e2c40ae6efebe794ce546bb9ba51.scope - libcontainer container c363621f03db73fb6b97cf583ea7fab31e64e2c40ae6efebe794ce546bb9ba51.
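The reconciler entries above enumerate the csi-node-driver-9vfbs volumes: four host paths (kubelet-dir, registration-dir, socket-dir, varrun) plus a projected service-account token (kube-api-access-4p5gp). A sketch of how such host-path volumes are expressed with k8s.io/api/core/v1; the volume names come from the log, but the concrete paths are assumptions (typical values for a CSI node plugin), since the log records only the UniqueName keys:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVol is our helper, not part of the Kubernetes API.
func hostPathVol(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	// Names from the reconciler_common.go entries; paths are assumed.
	vols := []corev1.Volume{
		hostPathVol("varrun", "/var/run"),
		hostPathVol("kubelet-dir", "/var/lib/kubelet"),
		hostPathVol("socket-dir", "/var/lib/kubelet/plugins/csi.tigera.io"),
		hostPathVol("registration-dir", "/var/lib/kubelet/plugins_registry"),
	}
	for _, v := range vols {
		fmt.Printf("%s -> %s\n", v.Name, v.HostPath.Path)
	}
}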
[... the driver-call.go / plugins.go FlexVolume probe-failure triplet resumes, repeating ~25 more times between 23:56:40.998 and 23:56:41.035; elided ...]
Jul 6 23:56:41.074416 kubelet[3188]: E0706 23:56:41.074377 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:41.074416 kubelet[3188]: W0706 23:56:41.074409 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:41.074633 kubelet[3188]: E0706 23:56:41.074440 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:41.203069 containerd[1734]: time="2025-07-06T23:56:41.202509448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qr5bm,Uid:a3a4e221-e83c-4dbf-a3bb-313f4eea5d11,Namespace:calico-system,Attempt:0,} returns sandbox id \"c363621f03db73fb6b97cf583ea7fab31e64e2c40ae6efebe794ce546bb9ba51\"" Jul 6 23:56:42.038323 kubelet[3188]: E0706 23:56:42.038210 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vfbs" podUID="17855cdd-9fca-46b9-9af2-cad254d32cd1" Jul 6 23:56:42.111451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525975817.mount: Deactivated successfully. Jul 6 23:56:43.285478 containerd[1734]: time="2025-07-06T23:56:43.285327510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:43.288316 containerd[1734]: time="2025-07-06T23:56:43.288226446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 6 23:56:43.294495 containerd[1734]: time="2025-07-06T23:56:43.294366521Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:43.299104 containerd[1734]: time="2025-07-06T23:56:43.298259469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:43.299104 containerd[1734]: time="2025-07-06T23:56:43.298938477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.47619139s" Jul 6 23:56:43.299104 containerd[1734]: time="2025-07-06T23:56:43.298980778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 6 23:56:43.300441 containerd[1734]: time="2025-07-06T23:56:43.300413895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:56:43.327798 containerd[1734]: time="2025-07-06T23:56:43.327743131Z" level=info msg="CreateContainer within sandbox \"53f05023471d8cf62979e2ad1efd41a74ba15a470d1d711201b5ab02e486caa3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:56:43.373203 containerd[1734]: time="2025-07-06T23:56:43.373150388Z" level=info msg="CreateContainer within sandbox \"53f05023471d8cf62979e2ad1efd41a74ba15a470d1d711201b5ab02e486caa3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"767879674509bf054e229e0beac3d1ade682eedc637b26bd64ae9cfe8a95631a\"" Jul 6 23:56:43.375001 containerd[1734]: time="2025-07-06T23:56:43.374410104Z" level=info msg="StartContainer for \"767879674509bf054e229e0beac3d1ade682eedc637b26bd64ae9cfe8a95631a\"" Jul 6 23:56:43.420715 systemd[1]: Started 
Jul 6 23:56:43.420715 systemd[1]: Started cri-containerd-767879674509bf054e229e0beac3d1ade682eedc637b26bd64ae9cfe8a95631a.scope - libcontainer container 767879674509bf054e229e0beac3d1ade682eedc637b26bd64ae9cfe8a95631a.
Jul 6 23:56:43.482859 containerd[1734]: time="2025-07-06T23:56:43.482562131Z" level=info msg="StartContainer for \"767879674509bf054e229e0beac3d1ade682eedc637b26bd64ae9cfe8a95631a\" returns successfully"
Jul 6 23:56:44.040760 kubelet[3188]: E0706 23:56:44.038745 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vfbs" podUID="17855cdd-9fca-46b9-9af2-cad254d32cd1"
Jul 6 23:56:44.215640 kubelet[3188]: E0706 23:56:44.215584 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:44.215908 kubelet[3188]: W0706 23:56:44.215880 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:44.216017 kubelet[3188]: E0706 23:56:44.215980 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:44.240723 kubelet[3188]: E0706 23:56:44.240704 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:44.240723 kubelet[3188]: W0706 23:56:44.240719 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:44.240798 kubelet[3188]: E0706 23:56:44.240733 3188 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:44.307392 systemd[1]: run-containerd-runc-k8s.io-767879674509bf054e229e0beac3d1ade682eedc637b26bd64ae9cfe8a95631a-runc.5f4dvt.mount: Deactivated successfully.
Jul 6 23:56:44.530372 containerd[1734]: time="2025-07-06T23:56:44.529667582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:44.536532 containerd[1734]: time="2025-07-06T23:56:44.536339364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 6 23:56:44.548279 containerd[1734]: time="2025-07-06T23:56:44.547603502Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:44.555547 containerd[1734]: time="2025-07-06T23:56:44.554914092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:44.556301 containerd[1734]: time="2025-07-06T23:56:44.556256308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.25562941s"
Jul 6 23:56:44.556451 containerd[1734]: time="2025-07-06T23:56:44.556428810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 6 23:56:44.560798 containerd[1734]: time="2025-07-06T23:56:44.560764663Z" level=info msg="CreateContainer within sandbox \"c363621f03db73fb6b97cf583ea7fab31e64e2c40ae6efebe794ce546bb9ba51\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 6 23:56:44.618009 containerd[1734]: time="2025-07-06T23:56:44.617963065Z" level=info msg="CreateContainer within sandbox \"c363621f03db73fb6b97cf583ea7fab31e64e2c40ae6efebe794ce546bb9ba51\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5f8d0633d0d8811d9c28d896c2e6b66ff79290b3313d7088ec22d4921eb5683d\""
Jul 6 23:56:44.620513 containerd[1734]: time="2025-07-06T23:56:44.618775775Z" level=info msg="StartContainer for \"5f8d0633d0d8811d9c28d896c2e6b66ff79290b3313d7088ec22d4921eb5683d\""
Jul 6 23:56:44.669701 systemd[1]: Started cri-containerd-5f8d0633d0d8811d9c28d896c2e6b66ff79290b3313d7088ec22d4921eb5683d.scope - libcontainer container 5f8d0633d0d8811d9c28d896c2e6b66ff79290b3313d7088ec22d4921eb5683d.
Jul 6 23:56:44.700708 containerd[1734]: time="2025-07-06T23:56:44.700643980Z" level=info msg="StartContainer for \"5f8d0633d0d8811d9c28d896c2e6b66ff79290b3313d7088ec22d4921eb5683d\" returns successfully"
Jul 6 23:56:44.714148 systemd[1]: cri-containerd-5f8d0633d0d8811d9c28d896c2e6b66ff79290b3313d7088ec22d4921eb5683d.scope: Deactivated successfully.
Jul 6 23:56:44.744440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f8d0633d0d8811d9c28d896c2e6b66ff79290b3313d7088ec22d4921eb5683d-rootfs.mount: Deactivated successfully.
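The flexvol-driver init container that just ran is what installs the binary kubelet was failing to find: FlexVolume expects one directory per driver under the plugin dir, named <vendor>~<driver> and containing an executable named <driver>, which is why the probes target .../nodeagent~uds/uds. A sketch of that path convention (an illustrative helper, not kubelet's implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // driverPath builds the executable path kubelet calls for a FlexVolume
    // driver: <pluginDir>/<vendor>~<driver>/<driver>.
    func driverPath(pluginDir, vendorDriver string) string {
        parts := strings.SplitN(vendorDriver, "~", 2)
        driver := parts[len(parts)-1]
        return filepath.Join(pluginDir, vendorDriver, driver)
    }

    func main() {
        p := driverPath("/opt/libexec/kubernetes/kubelet-plugins/volume/exec", "nodeagent~uds")
        fmt.Println(p) // .../nodeagent~uds/uds
        if _, err := os.Stat(p); err != nil {
            fmt.Println("driver not installed yet:", err)
        }
    }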
Jul 6 23:56:45.420795 kubelet[3188]: I0706 23:56:45.139921 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:56:45.420795 kubelet[3188]: I0706 23:56:45.166900 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75d5c8b566-kdxt8" podStartSLOduration=2.688416284 podStartE2EDuration="5.166875502s" podCreationTimestamp="2025-07-06 23:56:40 +0000 UTC" firstStartedPulling="2025-07-06 23:56:40.821746775 +0000 UTC m=+20.891342236" lastFinishedPulling="2025-07-06 23:56:43.300205993 +0000 UTC m=+23.369801454" observedRunningTime="2025-07-06 23:56:44.153381164 +0000 UTC m=+24.222976625" watchObservedRunningTime="2025-07-06 23:56:45.166875502 +0000 UTC m=+25.236471063"
Jul 6 23:56:46.038591 kubelet[3188]: E0706 23:56:46.037259 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vfbs" podUID="17855cdd-9fca-46b9-9af2-cad254d32cd1"
Jul 6 23:56:46.182317 containerd[1734]: time="2025-07-06T23:56:46.182241463Z" level=info msg="shim disconnected" id=5f8d0633d0d8811d9c28d896c2e6b66ff79290b3313d7088ec22d4921eb5683d namespace=k8s.io
Jul 6 23:56:46.182317 containerd[1734]: time="2025-07-06T23:56:46.182307864Z" level=warning msg="cleaning up after shim disconnected" id=5f8d0633d0d8811d9c28d896c2e6b66ff79290b3313d7088ec22d4921eb5683d namespace=k8s.io
Jul 6 23:56:46.182317 containerd[1734]: time="2025-07-06T23:56:46.182320864Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:56:46.195884 containerd[1734]: time="2025-07-06T23:56:46.195833530Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:56:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 6 23:56:47.148777 containerd[1734]: time="2025-07-06T23:56:47.148279072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 6 23:56:48.039014 kubelet[3188]: E0706 23:56:48.037912 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vfbs" podUID="17855cdd-9fca-46b9-9af2-cad254d32cd1"
Jul 6 23:56:50.038043 kubelet[3188]: E0706 23:56:50.037987 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vfbs" podUID="17855cdd-9fca-46b9-9af2-cad254d32cd1"
Jul 6 23:56:50.390534 containerd[1734]: time="2025-07-06T23:56:50.390453228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:50.392726 containerd[1734]: time="2025-07-06T23:56:50.392571754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 6 23:56:50.397076 containerd[1734]: time="2025-07-06T23:56:50.396791005Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:50.401928 containerd[1734]: time="2025-07-06T23:56:50.401887866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:50.402830 containerd[1734]: time="2025-07-06T23:56:50.402793577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.254470004s"
Jul 6 23:56:50.402934 containerd[1734]: time="2025-07-06T23:56:50.402836178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 6 23:56:50.406430 containerd[1734]: time="2025-07-06T23:56:50.406393121Z" level=info msg="CreateContainer within sandbox \"c363621f03db73fb6b97cf583ea7fab31e64e2c40ae6efebe794ce546bb9ba51\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 6 23:56:50.467058 containerd[1734]: time="2025-07-06T23:56:50.467006555Z" level=info msg="CreateContainer within sandbox \"c363621f03db73fb6b97cf583ea7fab31e64e2c40ae6efebe794ce546bb9ba51\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f\""
Jul 6 23:56:50.469532 containerd[1734]: time="2025-07-06T23:56:50.468479473Z" level=info msg="StartContainer for \"5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f\""
Jul 6 23:56:50.502699 systemd[1]: run-containerd-runc-k8s.io-5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f-runc.KcZhs4.mount: Deactivated successfully.
Jul 6 23:56:50.507918 systemd[1]: Started cri-containerd-5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f.scope - libcontainer container 5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f.
Jul 6 23:56:50.539811 containerd[1734]: time="2025-07-06T23:56:50.539762036Z" level=info msg="StartContainer for \"5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f\" returns successfully"
Jul 6 23:56:52.037859 kubelet[3188]: E0706 23:56:52.037801 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vfbs" podUID="17855cdd-9fca-46b9-9af2-cad254d32cd1"
Jul 6 23:56:52.273983 systemd[1]: cri-containerd-5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f.scope: Deactivated successfully.
Jul 6 23:56:52.284425 kubelet[3188]: I0706 23:56:52.284382 3188 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 6 23:56:52.310142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f-rootfs.mount: Deactivated successfully.
Jul 6 23:56:52.381053 systemd[1]: Created slice kubepods-burstable-pod13c25702_c1ec_47b6_aa0c_d133d6bd76b6.slice - libcontainer container kubepods-burstable-pod13c25702_c1ec_47b6_aa0c_d133d6bd76b6.slice.
Jul 6 23:56:52.398175 systemd[1]: Created slice kubepods-burstable-pod68b343a0_0dad_4108_a9c1_a80798ac4e3d.slice - libcontainer container kubepods-burstable-pod68b343a0_0dad_4108_a9c1_a80798ac4e3d.slice.
Jul 6 23:56:52.870084 kubelet[3188]: I0706 23:56:52.393430 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl49c\" (UniqueName: \"kubernetes.io/projected/245f3ff0-b96e-4a8c-bc57-a727dc8518df-kube-api-access-wl49c\") pod \"whisker-658968477d-kls7p\" (UID: \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\") " pod="calico-system/whisker-658968477d-kls7p"
Jul 6 23:56:52.870084 kubelet[3188]: I0706 23:56:52.393796 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68b343a0-0dad-4108-a9c1-a80798ac4e3d-config-volume\") pod \"coredns-7c65d6cfc9-qrbvq\" (UID: \"68b343a0-0dad-4108-a9c1-a80798ac4e3d\") " pod="kube-system/coredns-7c65d6cfc9-qrbvq"
Jul 6 23:56:52.870084 kubelet[3188]: I0706 23:56:52.393851 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtbc2\" (UniqueName: \"kubernetes.io/projected/68b343a0-0dad-4108-a9c1-a80798ac4e3d-kube-api-access-mtbc2\") pod \"coredns-7c65d6cfc9-qrbvq\" (UID: \"68b343a0-0dad-4108-a9c1-a80798ac4e3d\") " pod="kube-system/coredns-7c65d6cfc9-qrbvq"
Jul 6 23:56:52.870084 kubelet[3188]: I0706 23:56:52.393878 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13c25702-c1ec-47b6-aa0c-d133d6bd76b6-config-volume\") pod \"coredns-7c65d6cfc9-8bb5r\" (UID: \"13c25702-c1ec-47b6-aa0c-d133d6bd76b6\") " pod="kube-system/coredns-7c65d6cfc9-8bb5r"
Jul 6 23:56:52.870084 kubelet[3188]: I0706 23:56:52.393903 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/245f3ff0-b96e-4a8c-bc57-a727dc8518df-whisker-ca-bundle\") pod \"whisker-658968477d-kls7p\" (UID: \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\") " pod="calico-system/whisker-658968477d-kls7p"
Jul 6 23:56:52.410282 systemd[1]: Created slice kubepods-besteffort-pod6f831b75_68df_43d2_bb47_b29865b10566.slice - libcontainer container kubepods-besteffort-pod6f831b75_68df_43d2_bb47_b29865b10566.slice.
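The slice names in these entries encode kubelet's systemd cgroup driver convention: QoS class plus the pod UID with dashes mapped to underscores, suffixed with .slice. A sketch of the mapping, illustrative only (the Guaranteed-QoS case, which drops the QoS segment, is omitted):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice renders the systemd slice name used for a pod's cgroup:
    // kubepods-<qos>-pod<uid with "-" replaced by "_">.slice.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "13c25702-c1ec-47b6-aa0c-d133d6bd76b6"))
        // kubepods-burstable-pod13c25702_c1ec_47b6_aa0c_d133d6bd76b6.slice
    }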
Jul 6 23:56:52.870541 kubelet[3188]: I0706 23:56:52.393930 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnzfd\" (UniqueName: \"kubernetes.io/projected/13c25702-c1ec-47b6-aa0c-d133d6bd76b6-kube-api-access-lnzfd\") pod \"coredns-7c65d6cfc9-8bb5r\" (UID: \"13c25702-c1ec-47b6-aa0c-d133d6bd76b6\") " pod="kube-system/coredns-7c65d6cfc9-8bb5r"
Jul 6 23:56:52.870541 kubelet[3188]: I0706 23:56:52.393956 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88tkz\" (UniqueName: \"kubernetes.io/projected/6f831b75-68df-43d2-bb47-b29865b10566-kube-api-access-88tkz\") pod \"calico-apiserver-6cbd67b8cf-chfgm\" (UID: \"6f831b75-68df-43d2-bb47-b29865b10566\") " pod="calico-apiserver/calico-apiserver-6cbd67b8cf-chfgm"
Jul 6 23:56:52.870541 kubelet[3188]: I0706 23:56:52.393989 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f831b75-68df-43d2-bb47-b29865b10566-calico-apiserver-certs\") pod \"calico-apiserver-6cbd67b8cf-chfgm\" (UID: \"6f831b75-68df-43d2-bb47-b29865b10566\") " pod="calico-apiserver/calico-apiserver-6cbd67b8cf-chfgm"
Jul 6 23:56:52.870541 kubelet[3188]: I0706 23:56:52.394013 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/245f3ff0-b96e-4a8c-bc57-a727dc8518df-whisker-backend-key-pair\") pod \"whisker-658968477d-kls7p\" (UID: \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\") " pod="calico-system/whisker-658968477d-kls7p"
Jul 6 23:56:52.870541 kubelet[3188]: I0706 23:56:52.494324 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5edfa730-d45f-46ca-a7ed-ef497ff8b782-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-kl5jd\" (UID: \"5edfa730-d45f-46ca-a7ed-ef497ff8b782\") " pod="calico-system/goldmane-58fd7646b9-kl5jd"
Jul 6 23:56:52.421952 systemd[1]: Created slice kubepods-besteffort-pod245f3ff0_b96e_4a8c_bc57_a727dc8518df.slice - libcontainer container kubepods-besteffort-pod245f3ff0_b96e_4a8c_bc57_a727dc8518df.slice.
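Each UniqueName in these volume entries has the shape <plugin>/<pod-UID>-<volume-name>. A small parser sketch; the fixed 36-character UUID length is the assumption that makes the suffix recoverable, so treat this as illustrative only:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitUniqueName breaks a volume UniqueName like
    // "kubernetes.io/projected/<pod-uid>-<volume-name>" into its parts,
    // relying on pod UIDs being 36-character UUIDs (true for this log).
    func splitUniqueName(u string) (plugin, podUID, volume string, ok bool) {
        i := strings.LastIndex(u, "/")
        if i < 0 || len(u) < i+1+36+1 {
            return "", "", "", false
        }
        rest := u[i+1:]
        return u[:i], rest[:36], rest[37:], true
    }

    func main() {
        fmt.Println(splitUniqueName("kubernetes.io/projected/68b343a0-0dad-4108-a9c1-a80798ac4e3d-kube-api-access-mtbc2"))
        // kubernetes.io/projected 68b343a0-0dad-4108-a9c1-a80798ac4e3d kube-api-access-mtbc2 true
    }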
Jul 6 23:56:52.870859 kubelet[3188]: I0706 23:56:52.494386 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5edfa730-d45f-46ca-a7ed-ef497ff8b782-config\") pod \"goldmane-58fd7646b9-kl5jd\" (UID: \"5edfa730-d45f-46ca-a7ed-ef497ff8b782\") " pod="calico-system/goldmane-58fd7646b9-kl5jd"
Jul 6 23:56:52.870859 kubelet[3188]: I0706 23:56:52.494413 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/deec6d87-2985-4168-ab6f-d727fa58f2d2-calico-apiserver-certs\") pod \"calico-apiserver-6cbd67b8cf-wpmx9\" (UID: \"deec6d87-2985-4168-ab6f-d727fa58f2d2\") " pod="calico-apiserver/calico-apiserver-6cbd67b8cf-wpmx9"
Jul 6 23:56:52.870859 kubelet[3188]: I0706 23:56:52.494495 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2w4\" (UniqueName: \"kubernetes.io/projected/deec6d87-2985-4168-ab6f-d727fa58f2d2-kube-api-access-5l2w4\") pod \"calico-apiserver-6cbd67b8cf-wpmx9\" (UID: \"deec6d87-2985-4168-ab6f-d727fa58f2d2\") " pod="calico-apiserver/calico-apiserver-6cbd67b8cf-wpmx9"
Jul 6 23:56:52.870859 kubelet[3188]: I0706 23:56:52.496301 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5065e550-3d85-474c-9080-cbf916c3e61c-tigera-ca-bundle\") pod \"calico-kube-controllers-5647f696c4-hvltb\" (UID: \"5065e550-3d85-474c-9080-cbf916c3e61c\") " pod="calico-system/calico-kube-controllers-5647f696c4-hvltb"
Jul 6 23:56:52.870859 kubelet[3188]: I0706 23:56:52.496355 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czg82\" (UniqueName: \"kubernetes.io/projected/5edfa730-d45f-46ca-a7ed-ef497ff8b782-kube-api-access-czg82\") pod \"goldmane-58fd7646b9-kl5jd\" (UID: \"5edfa730-d45f-46ca-a7ed-ef497ff8b782\") " pod="calico-system/goldmane-58fd7646b9-kl5jd"
Jul 6 23:56:52.434483 systemd[1]: Created slice kubepods-besteffort-pod5065e550_3d85_474c_9080_cbf916c3e61c.slice - libcontainer container kubepods-besteffort-pod5065e550_3d85_474c_9080_cbf916c3e61c.slice.
Jul 6 23:56:52.871139 kubelet[3188]: I0706 23:56:52.496522 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbd2n\" (UniqueName: \"kubernetes.io/projected/5065e550-3d85-474c-9080-cbf916c3e61c-kube-api-access-kbd2n\") pod \"calico-kube-controllers-5647f696c4-hvltb\" (UID: \"5065e550-3d85-474c-9080-cbf916c3e61c\") " pod="calico-system/calico-kube-controllers-5647f696c4-hvltb"
Jul 6 23:56:52.871139 kubelet[3188]: I0706 23:56:52.496668 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5edfa730-d45f-46ca-a7ed-ef497ff8b782-goldmane-key-pair\") pod \"goldmane-58fd7646b9-kl5jd\" (UID: \"5edfa730-d45f-46ca-a7ed-ef497ff8b782\") " pod="calico-system/goldmane-58fd7646b9-kl5jd"
Jul 6 23:56:52.442936 systemd[1]: Created slice kubepods-besteffort-poddeec6d87_2985_4168_ab6f_d727fa58f2d2.slice - libcontainer container kubepods-besteffort-poddeec6d87_2985_4168_ab6f_d727fa58f2d2.slice.
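Every sandbox failure that follows reduces to one precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file written by the calico/node container, which is still coming up at this point. A sketch of that check, mirroring the error text in the entries below (illustrative, not Calico's code; the path is taken from the log):

    package main

    import (
        "fmt"
        "os"
    )

    // calicoNodeReady mirrors the precondition behind the errors below:
    // the CNI plugin needs /var/lib/calico/nodename, which calico/node
    // writes once it has started and mounted /var/lib/calico.
    func calicoNodeReady() error {
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return nil
    }

    func main() {
        fmt.Println(calicoNodeReady())
    }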
Jul 6 23:56:52.453594 systemd[1]: Created slice kubepods-besteffort-pod5edfa730_d45f_46ca_a7ed_ef497ff8b782.slice - libcontainer container kubepods-besteffort-pod5edfa730_d45f_46ca_a7ed_ef497ff8b782.slice.
Jul 6 23:56:53.174621 containerd[1734]: time="2025-07-06T23:56:53.174457936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8bb5r,Uid:13c25702-c1ec-47b6-aa0c-d133d6bd76b6,Namespace:kube-system,Attempt:0,}"
Jul 6 23:56:53.175180 containerd[1734]: time="2025-07-06T23:56:53.175082244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbd67b8cf-chfgm,Uid:6f831b75-68df-43d2-bb47-b29865b10566,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:56:53.181688 containerd[1734]: time="2025-07-06T23:56:53.181647323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kl5jd,Uid:5edfa730-d45f-46ca-a7ed-ef497ff8b782,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:53.192986 containerd[1734]: time="2025-07-06T23:56:53.192948660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qrbvq,Uid:68b343a0-0dad-4108-a9c1-a80798ac4e3d,Namespace:kube-system,Attempt:0,}"
Jul 6 23:56:53.198857 containerd[1734]: time="2025-07-06T23:56:53.198814731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbd67b8cf-wpmx9,Uid:deec6d87-2985-4168-ab6f-d727fa58f2d2,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:56:53.200770 containerd[1734]: time="2025-07-06T23:56:53.200730954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658968477d-kls7p,Uid:245f3ff0-b96e-4a8c-bc57-a727dc8518df,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:53.208820 containerd[1734]: time="2025-07-06T23:56:53.208770152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5647f696c4-hvltb,Uid:5065e550-3d85-474c-9080-cbf916c3e61c,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:53.517549 containerd[1734]: time="2025-07-06T23:56:53.517255387Z" level=info msg="shim disconnected" id=5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f namespace=k8s.io
Jul 6 23:56:53.517549 containerd[1734]: time="2025-07-06T23:56:53.517304287Z" level=warning msg="cleaning up after shim disconnected" id=5afbee1e77da79be57637472507864b60817fc355279def622d706f382f8fd6f namespace=k8s.io
Jul 6 23:56:53.517549 containerd[1734]: time="2025-07-06T23:56:53.517316288Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:56:53.917685 containerd[1734]: time="2025-07-06T23:56:53.917620834Z" level=error msg="Failed to destroy network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.921707 containerd[1734]: time="2025-07-06T23:56:53.921230778Z" level=error msg="encountered an error cleaning up failed sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.921707 containerd[1734]: time="2025-07-06T23:56:53.921309479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5647f696c4-hvltb,Uid:5065e550-3d85-474c-9080-cbf916c3e61c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.922379 kubelet[3188]: E0706 23:56:53.921588 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.922379 kubelet[3188]: E0706 23:56:53.922187 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5647f696c4-hvltb"
Jul 6 23:56:53.922379 kubelet[3188]: E0706 23:56:53.922235 3188 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5647f696c4-hvltb"
Jul 6 23:56:53.925004 kubelet[3188]: E0706 23:56:53.923229 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5647f696c4-hvltb_calico-system(5065e550-3d85-474c-9080-cbf916c3e61c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5647f696c4-hvltb_calico-system(5065e550-3d85-474c-9080-cbf916c3e61c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5647f696c4-hvltb" podUID="5065e550-3d85-474c-9080-cbf916c3e61c"
Jul 6 23:56:53.972955 containerd[1734]: time="2025-07-06T23:56:53.972758202Z" level=error msg="Failed to destroy network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.974270 containerd[1734]: time="2025-07-06T23:56:53.974132719Z" level=error msg="encountered an error cleaning up failed sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.974816 containerd[1734]: time="2025-07-06T23:56:53.974567224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658968477d-kls7p,Uid:245f3ff0-b96e-4a8c-bc57-a727dc8518df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.975652 kubelet[3188]: E0706 23:56:53.975435 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.975652 kubelet[3188]: E0706 23:56:53.975612 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658968477d-kls7p"
Jul 6 23:56:53.977131 kubelet[3188]: E0706 23:56:53.975830 3188 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658968477d-kls7p"
Jul 6 23:56:53.977889 kubelet[3188]: E0706 23:56:53.975954 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-658968477d-kls7p_calico-system(245f3ff0-b96e-4a8c-bc57-a727dc8518df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-658968477d-kls7p_calico-system(245f3ff0-b96e-4a8c-bc57-a727dc8518df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-658968477d-kls7p" podUID="245f3ff0-b96e-4a8c-bc57-a727dc8518df"
Jul 6 23:56:53.987714 containerd[1734]: time="2025-07-06T23:56:53.987653182Z" level=error msg="Failed to destroy network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.988807 containerd[1734]: time="2025-07-06T23:56:53.988677295Z" level=error msg="encountered an error cleaning up failed sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.988991 containerd[1734]: time="2025-07-06T23:56:53.988958398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbd67b8cf-chfgm,Uid:6f831b75-68df-43d2-bb47-b29865b10566,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.989701 kubelet[3188]: E0706 23:56:53.989580 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:53.991000 kubelet[3188]: E0706 23:56:53.989836 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-chfgm"
Jul 6 23:56:53.991000 kubelet[3188]: E0706 23:56:53.989879 3188 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-chfgm"
Jul 6 23:56:53.991233 kubelet[3188]: E0706 23:56:53.989961 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbd67b8cf-chfgm_calico-apiserver(6f831b75-68df-43d2-bb47-b29865b10566)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbd67b8cf-chfgm_calico-apiserver(6f831b75-68df-43d2-bb47-b29865b10566)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-chfgm" podUID="6f831b75-68df-43d2-bb47-b29865b10566"
Jul 6 23:56:54.013493 containerd[1734]: time="2025-07-06T23:56:54.012958289Z" level=error msg="Failed to destroy network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:56:54.013493 containerd[1734]: time="2025-07-06T23:56:54.013363494Z" level=error msg="encountered an error cleaning up failed sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\", marking
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.013493 containerd[1734]: time="2025-07-06T23:56:54.013429994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbd67b8cf-wpmx9,Uid:deec6d87-2985-4168-ab6f-d727fa58f2d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.014504 kubelet[3188]: E0706 23:56:54.014050 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.014504 kubelet[3188]: E0706 23:56:54.014120 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-wpmx9" Jul 6 23:56:54.014504 kubelet[3188]: E0706 23:56:54.014148 3188 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-wpmx9" Jul 6 23:56:54.014736 kubelet[3188]: E0706 23:56:54.014211 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbd67b8cf-wpmx9_calico-apiserver(deec6d87-2985-4168-ab6f-d727fa58f2d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbd67b8cf-wpmx9_calico-apiserver(deec6d87-2985-4168-ab6f-d727fa58f2d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-wpmx9" podUID="deec6d87-2985-4168-ab6f-d727fa58f2d2" Jul 6 23:56:54.019179 containerd[1734]: time="2025-07-06T23:56:54.019130463Z" level=error msg="Failed to destroy network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.020425 containerd[1734]: time="2025-07-06T23:56:54.020248277Z" level=error 
msg="encountered an error cleaning up failed sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.020835 containerd[1734]: time="2025-07-06T23:56:54.020704483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8bb5r,Uid:13c25702-c1ec-47b6-aa0c-d133d6bd76b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.022302 kubelet[3188]: E0706 23:56:54.021262 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.022302 kubelet[3188]: E0706 23:56:54.021335 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8bb5r" Jul 6 23:56:54.022302 kubelet[3188]: E0706 23:56:54.021362 3188 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8bb5r" Jul 6 23:56:54.022513 kubelet[3188]: E0706 23:56:54.021419 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8bb5r_kube-system(13c25702-c1ec-47b6-aa0c-d133d6bd76b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8bb5r_kube-system(13c25702-c1ec-47b6-aa0c-d133d6bd76b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8bb5r" podUID="13c25702-c1ec-47b6-aa0c-d133d6bd76b6" Jul 6 23:56:54.022790 containerd[1734]: time="2025-07-06T23:56:54.022757507Z" level=error msg="Failed to destroy network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.024087 containerd[1734]: 
time="2025-07-06T23:56:54.024048923Z" level=error msg="encountered an error cleaning up failed sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.024169 containerd[1734]: time="2025-07-06T23:56:54.024112524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kl5jd,Uid:5edfa730-d45f-46ca-a7ed-ef497ff8b782,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.024879 kubelet[3188]: E0706 23:56:54.024669 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.024879 kubelet[3188]: E0706 23:56:54.024738 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-kl5jd" Jul 6 23:56:54.024879 kubelet[3188]: E0706 23:56:54.024763 3188 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-kl5jd" Jul 6 23:56:54.025155 kubelet[3188]: E0706 23:56:54.024824 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-kl5jd_calico-system(5edfa730-d45f-46ca-a7ed-ef497ff8b782)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-kl5jd_calico-system(5edfa730-d45f-46ca-a7ed-ef497ff8b782)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-kl5jd" podUID="5edfa730-d45f-46ca-a7ed-ef497ff8b782" Jul 6 23:56:54.025669 containerd[1734]: time="2025-07-06T23:56:54.025622042Z" level=error msg="Failed to destroy network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 6 23:56:54.026535 containerd[1734]: time="2025-07-06T23:56:54.026164749Z" level=error msg="encountered an error cleaning up failed sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.026535 containerd[1734]: time="2025-07-06T23:56:54.026223249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qrbvq,Uid:68b343a0-0dad-4108-a9c1-a80798ac4e3d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.027504 kubelet[3188]: E0706 23:56:54.026724 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.027504 kubelet[3188]: E0706 23:56:54.026770 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qrbvq" Jul 6 23:56:54.027504 kubelet[3188]: E0706 23:56:54.026813 3188 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qrbvq" Jul 6 23:56:54.027623 kubelet[3188]: E0706 23:56:54.026898 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qrbvq_kube-system(68b343a0-0dad-4108-a9c1-a80798ac4e3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qrbvq_kube-system(68b343a0-0dad-4108-a9c1-a80798ac4e3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qrbvq" podUID="68b343a0-0dad-4108-a9c1-a80798ac4e3d" Jul 6 23:56:54.044008 systemd[1]: Created slice kubepods-besteffort-pod17855cdd_9fca_46b9_9af2_cad254d32cd1.slice - libcontainer container kubepods-besteffort-pod17855cdd_9fca_46b9_9af2_cad254d32cd1.slice. 
Jul 6 23:56:54.048963 containerd[1734]: time="2025-07-06T23:56:54.048858523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vfbs,Uid:17855cdd-9fca-46b9-9af2-cad254d32cd1,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:54.132056 containerd[1734]: time="2025-07-06T23:56:54.131901629Z" level=error msg="Failed to destroy network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.132552 containerd[1734]: time="2025-07-06T23:56:54.132511536Z" level=error msg="encountered an error cleaning up failed sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.132692 containerd[1734]: time="2025-07-06T23:56:54.132574837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vfbs,Uid:17855cdd-9fca-46b9-9af2-cad254d32cd1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.132868 kubelet[3188]: E0706 23:56:54.132829 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.132949 kubelet[3188]: E0706 23:56:54.132895 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vfbs" Jul 6 23:56:54.132949 kubelet[3188]: E0706 23:56:54.132921 3188 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vfbs" Jul 6 23:56:54.133049 kubelet[3188]: E0706 23:56:54.132983 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9vfbs_calico-system(17855cdd-9fca-46b9-9af2-cad254d32cd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9vfbs_calico-system(17855cdd-9fca-46b9-9af2-cad254d32cd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9vfbs" podUID="17855cdd-9fca-46b9-9af2-cad254d32cd1" Jul 6 23:56:54.165642 kubelet[3188]: I0706 23:56:54.165603 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:56:54.167631 containerd[1734]: time="2025-07-06T23:56:54.166456747Z" level=info msg="StopPodSandbox for \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\"" Jul 6 23:56:54.168251 kubelet[3188]: I0706 23:56:54.167529 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:56:54.168616 containerd[1734]: time="2025-07-06T23:56:54.168581673Z" level=info msg="StopPodSandbox for \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\"" Jul 6 23:56:54.169235 containerd[1734]: time="2025-07-06T23:56:54.169025978Z" level=info msg="Ensure that sandbox afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd in task-service has been cleanup successfully" Jul 6 23:56:54.169750 containerd[1734]: time="2025-07-06T23:56:54.169652186Z" level=info msg="Ensure that sandbox d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25 in task-service has been cleanup successfully" Jul 6 23:56:54.183175 kubelet[3188]: I0706 23:56:54.182997 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:56:54.184064 containerd[1734]: time="2025-07-06T23:56:54.183967659Z" level=info msg="StopPodSandbox for \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\"" Jul 6 23:56:54.184653 containerd[1734]: time="2025-07-06T23:56:54.184240763Z" level=info msg="Ensure that sandbox 765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5 in task-service has been cleanup successfully" Jul 6 23:56:54.188562 containerd[1734]: time="2025-07-06T23:56:54.185807982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:56:54.191763 kubelet[3188]: I0706 23:56:54.191735 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:56:54.195054 containerd[1734]: time="2025-07-06T23:56:54.195015093Z" level=info msg="StopPodSandbox for \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\"" Jul 6 23:56:54.195533 containerd[1734]: time="2025-07-06T23:56:54.195431998Z" level=info msg="Ensure that sandbox 5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e in task-service has been cleanup successfully" Jul 6 23:56:54.197450 kubelet[3188]: I0706 23:56:54.197420 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:56:54.200493 containerd[1734]: time="2025-07-06T23:56:54.198895840Z" level=info msg="StopPodSandbox for \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\"" Jul 6 23:56:54.200493 containerd[1734]: time="2025-07-06T23:56:54.199101043Z" level=info msg="Ensure that sandbox 78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4 in task-service has been cleanup successfully" Jul 6 23:56:54.203770 
kubelet[3188]: I0706 23:56:54.203295 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:56:54.205732 containerd[1734]: time="2025-07-06T23:56:54.204024002Z" level=info msg="StopPodSandbox for \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\"" Jul 6 23:56:54.205858 kubelet[3188]: I0706 23:56:54.205704 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:56:54.206255 containerd[1734]: time="2025-07-06T23:56:54.206214129Z" level=info msg="Ensure that sandbox c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403 in task-service has been cleanup successfully" Jul 6 23:56:54.217911 containerd[1734]: time="2025-07-06T23:56:54.217843569Z" level=info msg="StopPodSandbox for \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\"" Jul 6 23:56:54.232348 containerd[1734]: time="2025-07-06T23:56:54.229702213Z" level=info msg="Ensure that sandbox 842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e in task-service has been cleanup successfully" Jul 6 23:56:54.252195 kubelet[3188]: I0706 23:56:54.251571 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:56:54.254059 containerd[1734]: time="2025-07-06T23:56:54.253920506Z" level=info msg="StopPodSandbox for \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\"" Jul 6 23:56:54.257296 containerd[1734]: time="2025-07-06T23:56:54.257134045Z" level=info msg="Ensure that sandbox 1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1 in task-service has been cleanup successfully" Jul 6 23:56:54.323556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5-shm.mount: Deactivated successfully. Jul 6 23:56:54.323777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4-shm.mount: Deactivated successfully. 
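One pattern worth decoding in the kubelet lines above: every failure is reported as "rpc error: code = Unknown desc = ...". Kubelet drives containerd through the CRI gRPC API, and when a gRPC handler returns a plain Go error rather than a *status.Status, grpc-go hands it to the client as codes.Unknown with the error text as the description. A self-contained sketch of that round-trip using the public status package (illustrative only, not kubelet or containerd code; the sandbox ID is truncated):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// A CRI handler returning a plain error, like the sandbox setup here...
	handlerErr := errors.New(`failed to setup network for sandbox "78c44a...": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

	// ...carries no embedded *status.Status, so gRPC treats it as codes.Unknown.
	st := status.Convert(handlerErr)
	fmt.Println(st.Code() == codes.Unknown) // true
	fmt.Println(st.Message())               // the original text, verbatim

	// On the client side the error prints in the form kubelet logs:
	fmt.Println(st.Err()) // rpc error: code = Unknown desc = failed to setup ...
}
```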
Jul 6 23:56:54.382066 containerd[1734]: time="2025-07-06T23:56:54.381762054Z" level=error msg="StopPodSandbox for \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\" failed" error="failed to destroy network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.382771 kubelet[3188]: E0706 23:56:54.382402 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:56:54.382771 kubelet[3188]: E0706 23:56:54.382518 3188 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4"} Jul 6 23:56:54.382771 kubelet[3188]: E0706 23:56:54.382598 3188 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5065e550-3d85-474c-9080-cbf916c3e61c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:54.382771 kubelet[3188]: E0706 23:56:54.382629 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5065e550-3d85-474c-9080-cbf916c3e61c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5647f696c4-hvltb" podUID="5065e550-3d85-474c-9080-cbf916c3e61c" Jul 6 23:56:54.386645 containerd[1734]: time="2025-07-06T23:56:54.386454411Z" level=error msg="StopPodSandbox for \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\" failed" error="failed to destroy network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.386815 kubelet[3188]: E0706 23:56:54.386731 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:56:54.386815 kubelet[3188]: E0706 23:56:54.386783 3188 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e"} Jul 6 23:56:54.386949 kubelet[3188]: E0706 23:56:54.386823 3188 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:54.386949 kubelet[3188]: E0706 23:56:54.386853 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-658968477d-kls7p" podUID="245f3ff0-b96e-4a8c-bc57-a727dc8518df" Jul 6 23:56:54.409797 containerd[1734]: time="2025-07-06T23:56:54.409739593Z" level=error msg="StopPodSandbox for \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\" failed" error="failed to destroy network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.410380 kubelet[3188]: E0706 23:56:54.410198 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:56:54.410380 kubelet[3188]: E0706 23:56:54.410263 3188 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd"} Jul 6 23:56:54.410380 kubelet[3188]: E0706 23:56:54.410305 3188 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"13c25702-c1ec-47b6-aa0c-d133d6bd76b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:54.410380 kubelet[3188]: E0706 23:56:54.410339 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"13c25702-c1ec-47b6-aa0c-d133d6bd76b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8bb5r" podUID="13c25702-c1ec-47b6-aa0c-d133d6bd76b6" Jul 6 23:56:54.415717 containerd[1734]: time="2025-07-06T23:56:54.415659165Z" level=error msg="StopPodSandbox for \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\" failed" error="failed to destroy network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.416232 kubelet[3188]: E0706 23:56:54.415929 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:56:54.416232 kubelet[3188]: E0706 23:56:54.415991 3188 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25"} Jul 6 23:56:54.416232 kubelet[3188]: E0706 23:56:54.416036 3188 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68b343a0-0dad-4108-a9c1-a80798ac4e3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:54.416232 kubelet[3188]: E0706 23:56:54.416071 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68b343a0-0dad-4108-a9c1-a80798ac4e3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qrbvq" podUID="68b343a0-0dad-4108-a9c1-a80798ac4e3d" Jul 6 23:56:54.426541 containerd[1734]: time="2025-07-06T23:56:54.424584373Z" level=error msg="StopPodSandbox for \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\" failed" error="failed to destroy network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.426674 kubelet[3188]: E0706 23:56:54.424934 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:56:54.426674 kubelet[3188]: E0706 23:56:54.424986 3188 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403"} Jul 6 23:56:54.426674 kubelet[3188]: E0706 23:56:54.425029 3188 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17855cdd-9fca-46b9-9af2-cad254d32cd1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:54.426674 kubelet[3188]: E0706 23:56:54.425061 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17855cdd-9fca-46b9-9af2-cad254d32cd1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9vfbs" podUID="17855cdd-9fca-46b9-9af2-cad254d32cd1" Jul 6 23:56:54.435009 containerd[1734]: time="2025-07-06T23:56:54.434835497Z" level=error msg="StopPodSandbox for \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\" failed" error="failed to destroy network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.435679 kubelet[3188]: E0706 23:56:54.435121 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:56:54.435679 kubelet[3188]: E0706 23:56:54.435183 3188 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e"} Jul 6 23:56:54.435679 kubelet[3188]: E0706 23:56:54.435229 3188 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f831b75-68df-43d2-bb47-b29865b10566\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:54.435679 kubelet[3188]: E0706 23:56:54.435262 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"6f831b75-68df-43d2-bb47-b29865b10566\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-chfgm" podUID="6f831b75-68df-43d2-bb47-b29865b10566" Jul 6 23:56:54.442105 containerd[1734]: time="2025-07-06T23:56:54.442055984Z" level=error msg="StopPodSandbox for \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\" failed" error="failed to destroy network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.442549 kubelet[3188]: E0706 23:56:54.442335 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:56:54.442549 kubelet[3188]: E0706 23:56:54.442397 3188 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5"} Jul 6 23:56:54.442549 kubelet[3188]: E0706 23:56:54.442433 3188 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"deec6d87-2985-4168-ab6f-d727fa58f2d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:54.442549 kubelet[3188]: E0706 23:56:54.442454 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"deec6d87-2985-4168-ab6f-d727fa58f2d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-wpmx9" podUID="deec6d87-2985-4168-ab6f-d727fa58f2d2" Jul 6 23:56:54.445941 containerd[1734]: time="2025-07-06T23:56:54.445901631Z" level=error msg="StopPodSandbox for \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\" failed" error="failed to destroy network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:54.446173 kubelet[3188]: E0706 23:56:54.446128 3188 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:56:54.446314 kubelet[3188]: E0706 23:56:54.446193 3188 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1"} Jul 6 23:56:54.446314 kubelet[3188]: E0706 23:56:54.446280 3188 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5edfa730-d45f-46ca-a7ed-ef497ff8b782\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:54.446421 kubelet[3188]: E0706 23:56:54.446312 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5edfa730-d45f-46ca-a7ed-ef497ff8b782\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-kl5jd" podUID="5edfa730-d45f-46ca-a7ed-ef497ff8b782" Jul 6 23:57:00.521338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3412832383.mount: Deactivated successfully. 
Jul 6 23:57:00.575170 containerd[1734]: time="2025-07-06T23:57:00.575105600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:00.577226 containerd[1734]: time="2025-07-06T23:57:00.577165224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:57:00.581484 containerd[1734]: time="2025-07-06T23:57:00.581411874Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:00.587292 containerd[1734]: time="2025-07-06T23:57:00.587227441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:00.588210 containerd[1734]: time="2025-07-06T23:57:00.587833748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.401980666s" Jul 6 23:57:00.588210 containerd[1734]: time="2025-07-06T23:57:00.587877448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:57:00.596895 containerd[1734]: time="2025-07-06T23:57:00.596680651Z" level=info msg="CreateContainer within sandbox \"c363621f03db73fb6b97cf583ea7fab31e64e2c40ae6efebe794ce546bb9ba51\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:57:00.647621 containerd[1734]: time="2025-07-06T23:57:00.647576741Z" level=info msg="CreateContainer within sandbox \"c363621f03db73fb6b97cf583ea7fab31e64e2c40ae6efebe794ce546bb9ba51\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a0dd2bdd7585db6ccb219ad8e07fe1d6741c7e5ba0025ffa8349a02498d76d1\"" Jul 6 23:57:00.649488 containerd[1734]: time="2025-07-06T23:57:00.648275349Z" level=info msg="StartContainer for \"4a0dd2bdd7585db6ccb219ad8e07fe1d6741c7e5ba0025ffa8349a02498d76d1\"" Jul 6 23:57:00.677656 systemd[1]: Started cri-containerd-4a0dd2bdd7585db6ccb219ad8e07fe1d6741c7e5ba0025ffa8349a02498d76d1.scope - libcontainer container 4a0dd2bdd7585db6ccb219ad8e07fe1d6741c7e5ba0025ffa8349a02498d76d1. Jul 6 23:57:00.721586 containerd[1734]: time="2025-07-06T23:57:00.721377096Z" level=info msg="StartContainer for \"4a0dd2bdd7585db6ccb219ad8e07fe1d6741c7e5ba0025ffa8349a02498d76d1\" returns successfully" Jul 6 23:57:00.937975 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:57:00.938145 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
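For a sense of scale, the pull that just completed moved size "158500025" (about 158.5 MB) in 6.401980666s, roughly 24.8 MB/s from ghcr.io. A throwaway check using only the numbers in the log:

```go
package main

import "fmt"

func main() {
	const (
		imageBytes  = 158500025   // size reported by containerd above
		pullSeconds = 6.401980666 // pull duration reported by containerd above
	)
	fmt.Printf("%.1f MB/s\n", imageBytes/pullSeconds/1e6)      // ~24.8 MB/s
	fmt.Printf("%.1f MiB/s\n", imageBytes/pullSeconds/(1<<20)) // ~23.6 MiB/s
}
```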
Jul 6 23:57:01.075597 containerd[1734]: time="2025-07-06T23:57:01.075548001Z" level=info msg="StopPodSandbox for \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\"" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.195 [INFO][4416] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.195 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" iface="eth0" netns="/var/run/netns/cni-257e7ee2-cab6-ab84-0f01-2e7573195901" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.195 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" iface="eth0" netns="/var/run/netns/cni-257e7ee2-cab6-ab84-0f01-2e7573195901" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.195 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" iface="eth0" netns="/var/run/netns/cni-257e7ee2-cab6-ab84-0f01-2e7573195901" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.195 [INFO][4416] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.195 [INFO][4416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.233 [INFO][4424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" HandleID="k8s-pod-network.5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.233 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.233 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.240 [WARNING][4424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" HandleID="k8s-pod-network.5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.240 [INFO][4424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" HandleID="k8s-pod-network.5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.242 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:01.248420 containerd[1734]: 2025-07-06 23:57:01.246 [INFO][4416] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:01.248420 containerd[1734]: time="2025-07-06T23:57:01.248224603Z" level=info msg="TearDown network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\" successfully" Jul 6 23:57:01.248420 containerd[1734]: time="2025-07-06T23:57:01.248268303Z" level=info msg="StopPodSandbox for \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\" returns successfully" Jul 6 23:57:01.273795 kubelet[3188]: I0706 23:57:01.273618 3188 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/245f3ff0-b96e-4a8c-bc57-a727dc8518df-whisker-backend-key-pair\") pod \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\" (UID: \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\") " Jul 6 23:57:01.273795 kubelet[3188]: I0706 23:57:01.273671 3188 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/245f3ff0-b96e-4a8c-bc57-a727dc8518df-whisker-ca-bundle\") pod \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\" (UID: \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\") " Jul 6 23:57:01.273795 kubelet[3188]: I0706 23:57:01.273706 3188 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl49c\" (UniqueName: \"kubernetes.io/projected/245f3ff0-b96e-4a8c-bc57-a727dc8518df-kube-api-access-wl49c\") pod \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\" (UID: \"245f3ff0-b96e-4a8c-bc57-a727dc8518df\") " Jul 6 23:57:01.279622 kubelet[3188]: I0706 23:57:01.279584 3188 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/245f3ff0-b96e-4a8c-bc57-a727dc8518df-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "245f3ff0-b96e-4a8c-bc57-a727dc8518df" (UID: "245f3ff0-b96e-4a8c-bc57-a727dc8518df"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:57:01.281537 kubelet[3188]: I0706 23:57:01.280759 3188 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245f3ff0-b96e-4a8c-bc57-a727dc8518df-kube-api-access-wl49c" (OuterVolumeSpecName: "kube-api-access-wl49c") pod "245f3ff0-b96e-4a8c-bc57-a727dc8518df" (UID: "245f3ff0-b96e-4a8c-bc57-a727dc8518df"). InnerVolumeSpecName "kube-api-access-wl49c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:57:01.284104 kubelet[3188]: I0706 23:57:01.284067 3188 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245f3ff0-b96e-4a8c-bc57-a727dc8518df-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "245f3ff0-b96e-4a8c-bc57-a727dc8518df" (UID: "245f3ff0-b96e-4a8c-bc57-a727dc8518df"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:57:01.375280 kubelet[3188]: I0706 23:57:01.374428 3188 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl49c\" (UniqueName: \"kubernetes.io/projected/245f3ff0-b96e-4a8c-bc57-a727dc8518df-kube-api-access-wl49c\") on node \"ci-4081.3.4-a-6a836f1a00\" DevicePath \"\"" Jul 6 23:57:01.375280 kubelet[3188]: I0706 23:57:01.374476 3188 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/245f3ff0-b96e-4a8c-bc57-a727dc8518df-whisker-backend-key-pair\") on node \"ci-4081.3.4-a-6a836f1a00\" DevicePath \"\"" Jul 6 23:57:01.375280 kubelet[3188]: I0706 23:57:01.374493 3188 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/245f3ff0-b96e-4a8c-bc57-a727dc8518df-whisker-ca-bundle\") on node \"ci-4081.3.4-a-6a836f1a00\" DevicePath \"\"" Jul 6 23:57:01.520019 systemd[1]: run-netns-cni\x2d257e7ee2\x2dcab6\x2dab84\x2d0f01\x2d2e7573195901.mount: Deactivated successfully. Jul 6 23:57:01.520145 systemd[1]: var-lib-kubelet-pods-245f3ff0\x2db96e\x2d4a8c\x2dbc57\x2da727dc8518df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwl49c.mount: Deactivated successfully. Jul 6 23:57:01.520237 systemd[1]: var-lib-kubelet-pods-245f3ff0\x2db96e\x2d4a8c\x2dbc57\x2da727dc8518df-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:57:01.577487 systemd[1]: Removed slice kubepods-besteffort-pod245f3ff0_b96e_4a8c_bc57_a727dc8518df.slice - libcontainer container kubepods-besteffort-pod245f3ff0_b96e_4a8c_bc57_a727dc8518df.slice. Jul 6 23:57:01.594608 kubelet[3188]: I0706 23:57:01.593817 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qr5bm" podStartSLOduration=2.211844152 podStartE2EDuration="21.593789209s" podCreationTimestamp="2025-07-06 23:56:40 +0000 UTC" firstStartedPulling="2025-07-06 23:56:41.207026904 +0000 UTC m=+21.276622365" lastFinishedPulling="2025-07-06 23:57:00.588971861 +0000 UTC m=+40.658567422" observedRunningTime="2025-07-06 23:57:01.309281811 +0000 UTC m=+41.378877372" watchObservedRunningTime="2025-07-06 23:57:01.593789209 +0000 UTC m=+41.663384670" Jul 6 23:57:01.673803 systemd[1]: Created slice kubepods-besteffort-pod60305c84_fd1f_4167_9a56_18574ff2478f.slice - libcontainer container kubepods-besteffort-pod60305c84_fd1f_4167_9a56_18574ff2478f.slice. 
Jul 6 23:57:01.676837 kubelet[3188]: I0706 23:57:01.676794 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh8hd\" (UniqueName: \"kubernetes.io/projected/60305c84-fd1f-4167-9a56-18574ff2478f-kube-api-access-nh8hd\") pod \"whisker-67b5fcf874-t6hjw\" (UID: \"60305c84-fd1f-4167-9a56-18574ff2478f\") " pod="calico-system/whisker-67b5fcf874-t6hjw" Jul 6 23:57:01.676990 kubelet[3188]: I0706 23:57:01.676849 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60305c84-fd1f-4167-9a56-18574ff2478f-whisker-ca-bundle\") pod \"whisker-67b5fcf874-t6hjw\" (UID: \"60305c84-fd1f-4167-9a56-18574ff2478f\") " pod="calico-system/whisker-67b5fcf874-t6hjw" Jul 6 23:57:01.676990 kubelet[3188]: I0706 23:57:01.676875 3188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60305c84-fd1f-4167-9a56-18574ff2478f-whisker-backend-key-pair\") pod \"whisker-67b5fcf874-t6hjw\" (UID: \"60305c84-fd1f-4167-9a56-18574ff2478f\") " pod="calico-system/whisker-67b5fcf874-t6hjw" Jul 6 23:57:01.979903 containerd[1734]: time="2025-07-06T23:57:01.979853284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b5fcf874-t6hjw,Uid:60305c84-fd1f-4167-9a56-18574ff2478f,Namespace:calico-system,Attempt:0,}" Jul 6 23:57:02.051814 kubelet[3188]: I0706 23:57:02.051575 3188 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245f3ff0-b96e-4a8c-bc57-a727dc8518df" path="/var/lib/kubelet/pods/245f3ff0-b96e-4a8c-bc57-a727dc8518df/volumes" Jul 6 23:57:02.186526 systemd-networkd[1350]: calidf359f73fbe: Link UP Jul 6 23:57:02.186889 systemd-networkd[1350]: calidf359f73fbe: Gained carrier Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.068 [INFO][4452] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.078 [INFO][4452] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0 whisker-67b5fcf874- calico-system 60305c84-fd1f-4167-9a56-18574ff2478f 916 0 2025-07-06 23:57:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67b5fcf874 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-a-6a836f1a00 whisker-67b5fcf874-t6hjw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidf359f73fbe [] [] }} ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Namespace="calico-system" Pod="whisker-67b5fcf874-t6hjw" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.078 [INFO][4452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Namespace="calico-system" Pod="whisker-67b5fcf874-t6hjw" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.107 [INFO][4464] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" 
HandleID="k8s-pod-network.06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.107 [INFO][4464] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" HandleID="k8s-pod-network.06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-6a836f1a00", "pod":"whisker-67b5fcf874-t6hjw", "timestamp":"2025-07-06 23:57:02.107706366 +0000 UTC"}, Hostname:"ci-4081.3.4-a-6a836f1a00", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.107 [INFO][4464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.108 [INFO][4464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.108 [INFO][4464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-6a836f1a00' Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.130 [INFO][4464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.135 [INFO][4464] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.138 [INFO][4464] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.140 [INFO][4464] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.144 [INFO][4464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.145 [INFO][4464] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.146 [INFO][4464] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.153 [INFO][4464] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.160 [INFO][4464] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.1/26] block=192.168.92.0/26 handle="k8s-pod-network.06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.160 [INFO][4464] ipam/ipam.go 
878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.1/26] handle="k8s-pod-network.06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.160 [INFO][4464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:02.204352 containerd[1734]: 2025-07-06 23:57:02.160 [INFO][4464] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.1/26] IPv6=[] ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" HandleID="k8s-pod-network.06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" Jul 6 23:57:02.205603 containerd[1734]: 2025-07-06 23:57:02.162 [INFO][4452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Namespace="calico-system" Pod="whisker-67b5fcf874-t6hjw" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0", GenerateName:"whisker-67b5fcf874-", Namespace:"calico-system", SelfLink:"", UID:"60305c84-fd1f-4167-9a56-18574ff2478f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67b5fcf874", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"", Pod:"whisker-67b5fcf874-t6hjw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf359f73fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:02.205603 containerd[1734]: 2025-07-06 23:57:02.162 [INFO][4452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.1/32] ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Namespace="calico-system" Pod="whisker-67b5fcf874-t6hjw" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" Jul 6 23:57:02.205603 containerd[1734]: 2025-07-06 23:57:02.162 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf359f73fbe ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Namespace="calico-system" Pod="whisker-67b5fcf874-t6hjw" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" Jul 6 23:57:02.205603 containerd[1734]: 2025-07-06 23:57:02.184 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Namespace="calico-system" Pod="whisker-67b5fcf874-t6hjw" 
WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" Jul 6 23:57:02.205603 containerd[1734]: 2025-07-06 23:57:02.184 [INFO][4452] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Namespace="calico-system" Pod="whisker-67b5fcf874-t6hjw" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0", GenerateName:"whisker-67b5fcf874-", Namespace:"calico-system", SelfLink:"", UID:"60305c84-fd1f-4167-9a56-18574ff2478f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67b5fcf874", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed", Pod:"whisker-67b5fcf874-t6hjw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf359f73fbe", MAC:"8a:3e:ed:19:13:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:02.205603 containerd[1734]: 2025-07-06 23:57:02.201 [INFO][4452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed" Namespace="calico-system" Pod="whisker-67b5fcf874-t6hjw" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--67b5fcf874--t6hjw-eth0" Jul 6 23:57:02.229745 containerd[1734]: time="2025-07-06T23:57:02.229436577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:02.229745 containerd[1734]: time="2025-07-06T23:57:02.229512578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:02.229745 containerd[1734]: time="2025-07-06T23:57:02.229523278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:02.229745 containerd[1734]: time="2025-07-06T23:57:02.229689680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:02.255825 systemd[1]: Started cri-containerd-06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed.scope - libcontainer container 06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed. 
Jul 6 23:57:02.303734 containerd[1734]: time="2025-07-06T23:57:02.303690838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b5fcf874-t6hjw,Uid:60305c84-fd1f-4167-9a56-18574ff2478f,Namespace:calico-system,Attempt:0,} returns sandbox id \"06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed\"" Jul 6 23:57:02.305698 containerd[1734]: time="2025-07-06T23:57:02.305652460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:57:03.293640 systemd-networkd[1350]: calidf359f73fbe: Gained IPv6LL Jul 6 23:57:03.463449 kubelet[3188]: I0706 23:57:03.462587 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:03.555861 containerd[1734]: time="2025-07-06T23:57:03.555709818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.558018 containerd[1734]: time="2025-07-06T23:57:03.557971243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:57:03.562934 containerd[1734]: time="2025-07-06T23:57:03.561564984Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.567404 containerd[1734]: time="2025-07-06T23:57:03.566541240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.567404 containerd[1734]: time="2025-07-06T23:57:03.567230147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.261536286s" Jul 6 23:57:03.567404 containerd[1734]: time="2025-07-06T23:57:03.567267448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:57:03.569848 containerd[1734]: time="2025-07-06T23:57:03.569817676Z" level=info msg="CreateContainer within sandbox \"06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:57:03.606521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3684780537.mount: Deactivated successfully. Jul 6 23:57:03.613534 containerd[1734]: time="2025-07-06T23:57:03.613446467Z" level=info msg="CreateContainer within sandbox \"06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7554a6c985b0de5bc348e594fcb41f7d582b99d4e89fee49f10195ecf2d1265f\"" Jul 6 23:57:03.614428 containerd[1734]: time="2025-07-06T23:57:03.614374677Z" level=info msg="StartContainer for \"7554a6c985b0de5bc348e594fcb41f7d582b99d4e89fee49f10195ecf2d1265f\"" Jul 6 23:57:03.672785 systemd[1]: Started cri-containerd-7554a6c985b0de5bc348e594fcb41f7d582b99d4e89fee49f10195ecf2d1265f.scope - libcontainer container 7554a6c985b0de5bc348e594fcb41f7d582b99d4e89fee49f10195ecf2d1265f. 
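[Annotation] RunPodSandbox, PullImage, CreateContainer, and StartContainer above arrive over CRI, but the same containerd steps can be driven directly with its Go client. A sketch against the containerd 1.x client API — error handling trimmed to panics, and the container ID and snapshot name are made up:

```go
package main

import (
	"context"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same daemon that produced the log above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The 'PullImage "ghcr.io/flatcar/calico/whisker:v3.30.2"' step.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}

	// CreateContainer: metadata plus a writable snapshot and an OCI spec.
	container, err := client.NewContainer(ctx, "whisker-demo", // hypothetical ID
		containerd.WithImage(image),
		containerd.WithNewSnapshot("whisker-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		panic(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: a task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		panic(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil { // "StartContainer ... returns successfully"
		panic(err)
	}
}
```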
Jul 6 23:57:03.738842 containerd[1734]: time="2025-07-06T23:57:03.738797976Z" level=info msg="StartContainer for \"7554a6c985b0de5bc348e594fcb41f7d582b99d4e89fee49f10195ecf2d1265f\" returns successfully" Jul 6 23:57:03.741446 containerd[1734]: time="2025-07-06T23:57:03.741389606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:57:04.081553 kernel: bpftool[4712]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:57:04.453941 systemd-networkd[1350]: vxlan.calico: Link UP Jul 6 23:57:04.453954 systemd-networkd[1350]: vxlan.calico: Gained carrier Jul 6 23:57:05.853982 systemd-networkd[1350]: vxlan.calico: Gained IPv6LL Jul 6 23:57:05.893351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070191577.mount: Deactivated successfully. Jul 6 23:57:05.973707 containerd[1734]: time="2025-07-06T23:57:05.973651706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:05.976121 containerd[1734]: time="2025-07-06T23:57:05.975939232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 6 23:57:05.979585 containerd[1734]: time="2025-07-06T23:57:05.979524572Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:05.987630 containerd[1734]: time="2025-07-06T23:57:05.987563062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:05.988933 containerd[1734]: time="2025-07-06T23:57:05.988428872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.246973366s" Jul 6 23:57:05.988933 containerd[1734]: time="2025-07-06T23:57:05.988512873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 6 23:57:05.991352 containerd[1734]: time="2025-07-06T23:57:05.991191303Z" level=info msg="CreateContainer within sandbox \"06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:57:06.031647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085803774.mount: Deactivated successfully. 
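[Annotation] The single kernel line in the block above — bpftool's "memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set" — is the 6.3+ kernel asking callers to declare up front whether a memfd may ever become executable. A compliant call from Go, assuming a golang.org/x/sys/unix version recent enough to define MFD_NOEXEC_SEAL:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// MFD_NOEXEC_SEAL declares the memfd non-executable up front, which
	// silences the warning seen in the log (kernel >= 6.3 required).
	fd, err := unix.MemfdCreate("demo", unix.MFD_CLOEXEC|unix.MFD_ALLOW_SEALING|unix.MFD_NOEXEC_SEAL)
	if err != nil {
		fmt.Fprintln(os.Stderr, "memfd_create:", err)
		os.Exit(1)
	}
	defer unix.Close(fd)

	if _, err := unix.Write(fd, []byte("anonymous, sealed-noexec memory\n")); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Printf("created /proc/self/fd/%d\n", fd)
}
```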
Jul 6 23:57:06.038956 containerd[1734]: time="2025-07-06T23:57:06.038733338Z" level=info msg="StopPodSandbox for \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\"" Jul 6 23:57:06.040385 containerd[1734]: time="2025-07-06T23:57:06.040045453Z" level=info msg="StopPodSandbox for \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\"" Jul 6 23:57:06.043055 containerd[1734]: time="2025-07-06T23:57:06.043026086Z" level=info msg="StopPodSandbox for \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\"" Jul 6 23:57:06.044797 containerd[1734]: time="2025-07-06T23:57:06.044723305Z" level=info msg="CreateContainer within sandbox \"06797a6d61b015da83d12ee940ba6cf97e74aea56993ff4838a2c5551901f3ed\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"69d9a868ab767fc3ad567543be48f7318eb742791fa7b093a23daba0974459b6\"" Jul 6 23:57:06.045495 containerd[1734]: time="2025-07-06T23:57:06.045244911Z" level=info msg="StartContainer for \"69d9a868ab767fc3ad567543be48f7318eb742791fa7b093a23daba0974459b6\"" Jul 6 23:57:06.123755 systemd[1]: Started cri-containerd-69d9a868ab767fc3ad567543be48f7318eb742791fa7b093a23daba0974459b6.scope - libcontainer container 69d9a868ab767fc3ad567543be48f7318eb742791fa7b093a23daba0974459b6. Jul 6 23:57:06.216050 containerd[1734]: time="2025-07-06T23:57:06.215999131Z" level=info msg="StartContainer for \"69d9a868ab767fc3ad567543be48f7318eb742791fa7b093a23daba0974459b6\" returns successfully" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.226 [INFO][4831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.226 [INFO][4831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" iface="eth0" netns="/var/run/netns/cni-a42013a4-a424-dac8-1e6a-317e88a78889" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.227 [INFO][4831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" iface="eth0" netns="/var/run/netns/cni-a42013a4-a424-dac8-1e6a-317e88a78889" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.227 [INFO][4831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" iface="eth0" netns="/var/run/netns/cni-a42013a4-a424-dac8-1e6a-317e88a78889" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.227 [INFO][4831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.227 [INFO][4831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.302 [INFO][4876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" HandleID="k8s-pod-network.842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.303 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.303 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.315 [WARNING][4876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" HandleID="k8s-pod-network.842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.315 [INFO][4876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" HandleID="k8s-pod-network.842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.330 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:06.338831 containerd[1734]: 2025-07-06 23:57:06.335 [INFO][4831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:06.343628 containerd[1734]: time="2025-07-06T23:57:06.343578766Z" level=info msg="TearDown network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\" successfully" Jul 6 23:57:06.343628 containerd[1734]: time="2025-07-06T23:57:06.343618766Z" level=info msg="StopPodSandbox for \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\" returns successfully" Jul 6 23:57:06.344691 containerd[1734]: time="2025-07-06T23:57:06.344660078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbd67b8cf-chfgm,Uid:6f831b75-68df-43d2-bb47-b29865b10566,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:57:06.346473 systemd[1]: run-netns-cni\x2da42013a4\x2da424\x2ddac8\x2d1e6a\x2d317e88a78889.mount: Deactivated successfully. Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.231 [INFO][4822] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.232 [INFO][4822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" iface="eth0" netns="/var/run/netns/cni-f298745b-9e97-4a55-52fc-7841a31b80de" Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.232 [INFO][4822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" iface="eth0" netns="/var/run/netns/cni-f298745b-9e97-4a55-52fc-7841a31b80de" Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.233 [INFO][4822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" iface="eth0" netns="/var/run/netns/cni-f298745b-9e97-4a55-52fc-7841a31b80de" Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.234 [INFO][4822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.234 [INFO][4822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.320 [INFO][4881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" HandleID="k8s-pod-network.c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.320 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.332 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.354 [WARNING][4881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" HandleID="k8s-pod-network.c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.354 [INFO][4881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" HandleID="k8s-pod-network.c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.387 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:06.414850 containerd[1734]: 2025-07-06 23:57:06.406 [INFO][4822] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:06.416386 containerd[1734]: time="2025-07-06T23:57:06.415656576Z" level=info msg="TearDown network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\" successfully" Jul 6 23:57:06.416386 containerd[1734]: time="2025-07-06T23:57:06.415725677Z" level=info msg="StopPodSandbox for \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\" returns successfully" Jul 6 23:57:06.417339 containerd[1734]: time="2025-07-06T23:57:06.417134693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vfbs,Uid:17855cdd-9fca-46b9-9af2-cad254d32cd1,Namespace:calico-system,Attempt:1,}" Jul 6 23:57:06.424966 systemd[1]: run-netns-cni\x2df298745b\x2d9e97\x2d4a55\x2d52fc\x2d7841a31b80de.mount: Deactivated successfully. Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.261 [INFO][4824] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.261 [INFO][4824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" iface="eth0" netns="/var/run/netns/cni-353b2203-967f-135b-41c2-00fcbfc985b3" Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.262 [INFO][4824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" iface="eth0" netns="/var/run/netns/cni-353b2203-967f-135b-41c2-00fcbfc985b3" Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.264 [INFO][4824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" iface="eth0" netns="/var/run/netns/cni-353b2203-967f-135b-41c2-00fcbfc985b3" Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.264 [INFO][4824] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.264 [INFO][4824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.357 [INFO][4887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" HandleID="k8s-pod-network.afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.358 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.387 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.413 [WARNING][4887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" HandleID="k8s-pod-network.afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.413 [INFO][4887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" HandleID="k8s-pod-network.afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.419 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:06.428427 containerd[1734]: 2025-07-06 23:57:06.423 [INFO][4824] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:06.428427 containerd[1734]: time="2025-07-06T23:57:06.428104716Z" level=info msg="TearDown network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\" successfully" Jul 6 23:57:06.428427 containerd[1734]: time="2025-07-06T23:57:06.428134216Z" level=info msg="StopPodSandbox for \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\" returns successfully" Jul 6 23:57:06.429039 containerd[1734]: time="2025-07-06T23:57:06.428950826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8bb5r,Uid:13c25702-c1ec-47b6-aa0c-d133d6bd76b6,Namespace:kube-system,Attempt:1,}" Jul 6 23:57:06.567995 systemd-networkd[1350]: cali0b5e1c78d4d: Link UP Jul 6 23:57:06.569429 systemd-networkd[1350]: cali0b5e1c78d4d: Gained carrier Jul 6 23:57:06.595601 kubelet[3188]: I0706 23:57:06.594523 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-67b5fcf874-t6hjw" podStartSLOduration=1.910312059 podStartE2EDuration="5.594493487s" podCreationTimestamp="2025-07-06 23:57:01 +0000 UTC" firstStartedPulling="2025-07-06 23:57:02.305257656 +0000 UTC m=+42.374853217" lastFinishedPulling="2025-07-06 23:57:05.989439184 +0000 UTC m=+46.059034645" observedRunningTime="2025-07-06 23:57:06.324049646 +0000 UTC m=+46.393645207" watchObservedRunningTime="2025-07-06 23:57:06.594493487 +0000 UTC m=+46.664089148" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.461 [INFO][4901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0 calico-apiserver-6cbd67b8cf- calico-apiserver 6f831b75-68df-43d2-bb47-b29865b10566 943 0 2025-07-06 23:56:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cbd67b8cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-6a836f1a00 calico-apiserver-6cbd67b8cf-chfgm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0b5e1c78d4d [] [] }} ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-chfgm" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.461 [INFO][4901] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-chfgm" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.491 [INFO][4914] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" HandleID="k8s-pod-network.4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.492 [INFO][4914] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" HandleID="k8s-pod-network.4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-6a836f1a00", "pod":"calico-apiserver-6cbd67b8cf-chfgm", "timestamp":"2025-07-06 23:57:06.491317327 +0000 UTC"}, Hostname:"ci-4081.3.4-a-6a836f1a00", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.492 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.492 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
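[Annotation] Two recurring lines in the teardown traces above — "Workload's veth was already gone. Nothing to do." and the WARNING "Asked to release address but it doesn't exist. Ignoring" — are the CNI DEL idempotency convention in action: DEL may be retried, so absence counts as success. A sketch of that pattern using the vishvananda/netlink package (the interface name is hypothetical, and actually deleting a link needs CAP_NET_ADMIN):

```go
package main

import (
	"errors"
	"fmt"

	"github.com/vishvananda/netlink"
)

// deleteIfPresent applies the idempotent-DEL convention: StopPodSandbox
// can be retried, so an interface that is already gone is a success,
// not an error.
func deleteIfPresent(name string) error {
	link, err := netlink.LinkByName(name)
	if err != nil {
		var notFound netlink.LinkNotFoundError
		if errors.As(err, &notFound) {
			return nil // "Workload's veth was already gone. Nothing to do."
		}
		return err
	}
	return netlink.LinkDel(link)
}

func main() {
	if err := deleteIfPresent("calidemo0000000"); err != nil { // hypothetical name
		fmt.Println("DEL failed:", err)
		return
	}
	fmt.Println("DEL succeeded (deleted or already absent)")
}
```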
Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.492 [INFO][4914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-6a836f1a00' Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.502 [INFO][4914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.509 [INFO][4914] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.517 [INFO][4914] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.522 [INFO][4914] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.526 [INFO][4914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.530 [INFO][4914] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.533 [INFO][4914] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54 Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.542 [INFO][4914] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.557 [INFO][4914] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.2/26] block=192.168.92.0/26 handle="k8s-pod-network.4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.558 [INFO][4914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.2/26] handle="k8s-pod-network.4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.558 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:57:06.603765 containerd[1734]: 2025-07-06 23:57:06.558 [INFO][4914] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.2/26] IPv6=[] ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" HandleID="k8s-pod-network.4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.604922 containerd[1734]: 2025-07-06 23:57:06.563 [INFO][4901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-chfgm" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0", GenerateName:"calico-apiserver-6cbd67b8cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f831b75-68df-43d2-bb47-b29865b10566", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbd67b8cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"", Pod:"calico-apiserver-6cbd67b8cf-chfgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b5e1c78d4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:06.604922 containerd[1734]: 2025-07-06 23:57:06.564 [INFO][4901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.2/32] ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-chfgm" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.604922 containerd[1734]: 2025-07-06 23:57:06.564 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b5e1c78d4d ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-chfgm" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.604922 containerd[1734]: 2025-07-06 23:57:06.568 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-chfgm" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.604922 containerd[1734]: 2025-07-06 23:57:06.569 [INFO][4901] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-chfgm" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0", GenerateName:"calico-apiserver-6cbd67b8cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f831b75-68df-43d2-bb47-b29865b10566", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbd67b8cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54", Pod:"calico-apiserver-6cbd67b8cf-chfgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b5e1c78d4d", MAC:"62:b1:7c:de:d4:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:06.604922 containerd[1734]: 2025-07-06 23:57:06.597 [INFO][4901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-chfgm" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:06.646422 containerd[1734]: time="2025-07-06T23:57:06.646320070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:06.646622 containerd[1734]: time="2025-07-06T23:57:06.646397671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:06.646622 containerd[1734]: time="2025-07-06T23:57:06.646430571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:06.646622 containerd[1734]: time="2025-07-06T23:57:06.646585673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:06.687112 systemd[1]: Started cri-containerd-4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54.scope - libcontainer container 4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54. 
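[Annotation] The \x2d and \x7e runs in the mount units deactivated in earlier entries (run-netns-cni\x2df298745b... and the containerd tmpmounts) are systemd's path escaping: "/" becomes "-", and any byte outside [A-Za-z0-9:_.] is hex-escaped so the unit name round-trips to the original path. A sketch approximating `systemd-escape --path` (real systemd also special-cases a leading dot and the root path):

```go
package main

import "fmt"

// escapePath approximates `systemd-escape --path`: drop the leading "/",
// turn the remaining "/" separators into "-", and hex-escape every byte
// outside [A-Za-z0-9:_.] as \xNN, so a literal "-" becomes \x2d and "~"
// becomes \x7e, as in the mount unit names above.
func escapePath(path string) string {
	if len(path) > 0 && path[0] == '/' {
		path = path[1:]
	}
	out := make([]byte, 0, len(path))
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, fmt.Sprintf(`\x%02x`, c)...)
		}
	}
	return string(out)
}

func main() {
	fmt.Println(escapePath("/run/netns/cni-f298745b-9e97-4a55-52fc-7841a31b80de") + ".mount")
	// -> run-netns-cni\x2df298745b\x2d9e97\x2d4a55\x2d52fc\x2d7841a31b80de.mount
}
```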
Jul 6 23:57:06.746345 containerd[1734]: time="2025-07-06T23:57:06.745614386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbd67b8cf-chfgm,Uid:6f831b75-68df-43d2-bb47-b29865b10566,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54\"" Jul 6 23:57:06.752796 containerd[1734]: time="2025-07-06T23:57:06.752530364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:57:06.773914 systemd-networkd[1350]: cali0a8c920207e: Link UP Jul 6 23:57:06.774138 systemd-networkd[1350]: cali0a8c920207e: Gained carrier Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.599 [INFO][4921] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0 csi-node-driver- calico-system 17855cdd-9fca-46b9-9af2-cad254d32cd1 945 0 2025-07-06 23:56:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-a-6a836f1a00 csi-node-driver-9vfbs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0a8c920207e [] [] }} ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Namespace="calico-system" Pod="csi-node-driver-9vfbs" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.600 [INFO][4921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Namespace="calico-system" Pod="csi-node-driver-9vfbs" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.679 [INFO][4954] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" HandleID="k8s-pod-network.6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.679 [INFO][4954] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" HandleID="k8s-pod-network.6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000342fc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-6a836f1a00", "pod":"csi-node-driver-9vfbs", "timestamp":"2025-07-06 23:57:06.678743534 +0000 UTC"}, Hostname:"ci-4081.3.4-a-6a836f1a00", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.680 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.680 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.680 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-6a836f1a00' Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.692 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.724 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.729 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.732 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.736 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.736 [INFO][4954] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.738 [INFO][4954] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.747 [INFO][4954] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.764 [INFO][4954] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.3/26] block=192.168.92.0/26 handle="k8s-pod-network.6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.764 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.3/26] handle="k8s-pod-network.6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.764 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:57:06.811027 containerd[1734]: 2025-07-06 23:57:06.764 [INFO][4954] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.3/26] IPv6=[] ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" HandleID="k8s-pod-network.6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.814594 containerd[1734]: 2025-07-06 23:57:06.768 [INFO][4921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Namespace="calico-system" Pod="csi-node-driver-9vfbs" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17855cdd-9fca-46b9-9af2-cad254d32cd1", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"", Pod:"csi-node-driver-9vfbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a8c920207e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:06.814594 containerd[1734]: 2025-07-06 23:57:06.768 [INFO][4921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.3/32] ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Namespace="calico-system" Pod="csi-node-driver-9vfbs" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.814594 containerd[1734]: 2025-07-06 23:57:06.768 [INFO][4921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a8c920207e ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Namespace="calico-system" Pod="csi-node-driver-9vfbs" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.814594 containerd[1734]: 2025-07-06 23:57:06.774 [INFO][4921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Namespace="calico-system" Pod="csi-node-driver-9vfbs" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.814594 containerd[1734]: 2025-07-06 23:57:06.775 [INFO][4921] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Namespace="calico-system" Pod="csi-node-driver-9vfbs" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17855cdd-9fca-46b9-9af2-cad254d32cd1", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a", Pod:"csi-node-driver-9vfbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a8c920207e", MAC:"fa:c9:8f:e2:42:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:06.814594 containerd[1734]: 2025-07-06 23:57:06.807 [INFO][4921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a" Namespace="calico-system" Pod="csi-node-driver-9vfbs" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:06.856267 containerd[1734]: time="2025-07-06T23:57:06.856165529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:06.856755 containerd[1734]: time="2025-07-06T23:57:06.856229430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:06.856755 containerd[1734]: time="2025-07-06T23:57:06.856249430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:06.856755 containerd[1734]: time="2025-07-06T23:57:06.856354131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:06.884452 systemd-networkd[1350]: cali5631955af2e: Link UP Jul 6 23:57:06.887794 systemd-networkd[1350]: cali5631955af2e: Gained carrier Jul 6 23:57:06.900053 systemd[1]: Started cri-containerd-6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a.scope - libcontainer container 6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a. 
Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.597 [INFO][4930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0 coredns-7c65d6cfc9- kube-system 13c25702-c1ec-47b6-aa0c-d133d6bd76b6 946 0 2025-07-06 23:56:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-6a836f1a00 coredns-7c65d6cfc9-8bb5r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5631955af2e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8bb5r" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.598 [INFO][4930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8bb5r" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.676 [INFO][4949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" HandleID="k8s-pod-network.fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.678 [INFO][4949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" HandleID="k8s-pod-network.fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307930), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-6a836f1a00", "pod":"coredns-7c65d6cfc9-8bb5r", "timestamp":"2025-07-06 23:57:06.676401608 +0000 UTC"}, Hostname:"ci-4081.3.4-a-6a836f1a00", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.680 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.764 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.765 [INFO][4949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-6a836f1a00' Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.793 [INFO][4949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.825 [INFO][4949] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.835 [INFO][4949] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.839 [INFO][4949] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.843 [INFO][4949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.845 [INFO][4949] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.849 [INFO][4949] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.859 [INFO][4949] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.870 [INFO][4949] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.4/26] block=192.168.92.0/26 handle="k8s-pod-network.fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.870 [INFO][4949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.4/26] handle="k8s-pod-network.fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.870 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
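Note the lock handoff between the two requests: [4949] logged "About to acquire host-wide IPAM lock" at 23:57:06.680 and only acquired it at 23:57:06.764, the same instant [4954] released it. Each CNI ADD runs as a separate plugin process, so the lock has to work across processes; a sketch of the idea with flock(2), where the lock-file path is invented for illustration:

    // Cross-process serialization in the style of the host-wide IPAM lock.
    package main

    import (
        "os"
        "syscall"
    )

    func withHostIPAMLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        // Blocks until the current holder releases: the gap between
        // "About to acquire" and "Acquired" in the log above.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
        return fn()
    }

    func main() {
        _ = withHostIPAMLock("/tmp/calico-ipam.lock", func() error {
            return nil // assign addresses while serialized
        })
    }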
Jul 6 23:57:06.935236 containerd[1734]: 2025-07-06 23:57:06.871 [INFO][4949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.4/26] IPv6=[] ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" HandleID="k8s-pod-network.fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.936221 containerd[1734]: 2025-07-06 23:57:06.877 [INFO][4930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8bb5r" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"13c25702-c1ec-47b6-aa0c-d133d6bd76b6", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"", Pod:"coredns-7c65d6cfc9-8bb5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5631955af2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:06.936221 containerd[1734]: 2025-07-06 23:57:06.877 [INFO][4930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.4/32] ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8bb5r" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.936221 containerd[1734]: 2025-07-06 23:57:06.877 [INFO][4930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5631955af2e ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8bb5r" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.936221 containerd[1734]: 2025-07-06 23:57:06.887 [INFO][4930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8bb5r" 
WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.936221 containerd[1734]: 2025-07-06 23:57:06.895 [INFO][4930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8bb5r" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"13c25702-c1ec-47b6-aa0c-d133d6bd76b6", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b", Pod:"coredns-7c65d6cfc9-8bb5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5631955af2e", MAC:"9e:a8:22:0a:ce:81", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:06.936221 containerd[1734]: 2025-07-06 23:57:06.929 [INFO][4930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8bb5r" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:06.964702 containerd[1734]: time="2025-07-06T23:57:06.964601249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vfbs,Uid:17855cdd-9fca-46b9-9af2-cad254d32cd1,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a\"" Jul 6 23:57:06.980514 containerd[1734]: time="2025-07-06T23:57:06.980391826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:06.981451 containerd[1734]: time="2025-07-06T23:57:06.981273336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:06.981750 containerd[1734]: time="2025-07-06T23:57:06.981716341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:06.982009 containerd[1734]: time="2025-07-06T23:57:06.981976644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:07.000033 systemd[1]: Started cri-containerd-fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b.scope - libcontainer container fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b. Jul 6 23:57:07.039991 containerd[1734]: time="2025-07-06T23:57:07.038851583Z" level=info msg="StopPodSandbox for \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\"" Jul 6 23:57:07.041840 containerd[1734]: time="2025-07-06T23:57:07.041788016Z" level=info msg="StopPodSandbox for \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\"" Jul 6 23:57:07.061404 containerd[1734]: time="2025-07-06T23:57:07.061345936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8bb5r,Uid:13c25702-c1ec-47b6-aa0c-d133d6bd76b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b\"" Jul 6 23:57:07.067235 containerd[1734]: time="2025-07-06T23:57:07.067115801Z" level=info msg="CreateContainer within sandbox \"fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:57:07.121416 containerd[1734]: time="2025-07-06T23:57:07.121373611Z" level=info msg="CreateContainer within sandbox \"fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"550a8601e86e8d2d7ea8c052ce7efdd385a6f611897a7c4635835457600cfa88\"" Jul 6 23:57:07.131964 containerd[1734]: time="2025-07-06T23:57:07.131422224Z" level=info msg="StartContainer for \"550a8601e86e8d2d7ea8c052ce7efdd385a6f611897a7c4635835457600cfa88\"" Jul 6 23:57:07.197825 systemd[1]: Started cri-containerd-550a8601e86e8d2d7ea8c052ce7efdd385a6f611897a7c4635835457600cfa88.scope - libcontainer container 550a8601e86e8d2d7ea8c052ce7efdd385a6f611897a7c4635835457600cfa88. Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.129 [INFO][5120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.131 [INFO][5120] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" iface="eth0" netns="/var/run/netns/cni-59bbe21f-8d27-2fac-4731-00e487e87ff3" Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.132 [INFO][5120] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" iface="eth0" netns="/var/run/netns/cni-59bbe21f-8d27-2fac-4731-00e487e87ff3" Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.132 [INFO][5120] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" iface="eth0" netns="/var/run/netns/cni-59bbe21f-8d27-2fac-4731-00e487e87ff3" Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.132 [INFO][5120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.133 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.199 [INFO][5139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" HandleID="k8s-pod-network.765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.199 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.199 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.213 [WARNING][5139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" HandleID="k8s-pod-network.765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.213 [INFO][5139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" HandleID="k8s-pod-network.765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.215 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:07.220943 containerd[1734]: 2025-07-06 23:57:07.217 [INFO][5120] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:07.222232 containerd[1734]: time="2025-07-06T23:57:07.222095844Z" level=info msg="TearDown network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\" successfully" Jul 6 23:57:07.222232 containerd[1734]: time="2025-07-06T23:57:07.222157545Z" level=info msg="StopPodSandbox for \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\" returns successfully" Jul 6 23:57:07.225217 containerd[1734]: time="2025-07-06T23:57:07.225044177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbd67b8cf-wpmx9,Uid:deec6d87-2985-4168-ab6f-d727fa58f2d2,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:57:07.248330 containerd[1734]: time="2025-07-06T23:57:07.248283938Z" level=info msg="StartContainer for \"550a8601e86e8d2d7ea8c052ce7efdd385a6f611897a7c4635835457600cfa88\" returns successfully" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.171 [INFO][5125] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.172 [INFO][5125] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" iface="eth0" netns="/var/run/netns/cni-6e2919a3-924c-359c-01c9-3fc5d5a37341" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.173 [INFO][5125] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" iface="eth0" netns="/var/run/netns/cni-6e2919a3-924c-359c-01c9-3fc5d5a37341" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.174 [INFO][5125] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" iface="eth0" netns="/var/run/netns/cni-6e2919a3-924c-359c-01c9-3fc5d5a37341" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.174 [INFO][5125] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.174 [INFO][5125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.244 [INFO][5153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" HandleID="k8s-pod-network.78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.244 [INFO][5153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.244 [INFO][5153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.259 [WARNING][5153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" HandleID="k8s-pod-network.78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.259 [INFO][5153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" HandleID="k8s-pod-network.78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.262 [INFO][5153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:07.267697 containerd[1734]: 2025-07-06 23:57:07.264 [INFO][5125] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:07.269155 containerd[1734]: time="2025-07-06T23:57:07.267840658Z" level=info msg="TearDown network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\" successfully" Jul 6 23:57:07.269155 containerd[1734]: time="2025-07-06T23:57:07.267874059Z" level=info msg="StopPodSandbox for \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\" returns successfully" Jul 6 23:57:07.292730 containerd[1734]: time="2025-07-06T23:57:07.292572236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5647f696c4-hvltb,Uid:5065e550-3d85-474c-9080-cbf916c3e61c,Namespace:calico-system,Attempt:1,}" Jul 6 23:57:07.300062 systemd[1]: run-netns-cni\x2d353b2203\x2d967f\x2d135b\x2d41c2\x2d00fcbfc985b3.mount: Deactivated successfully. Jul 6 23:57:07.300186 systemd[1]: run-netns-cni\x2d59bbe21f\x2d8d27\x2d2fac\x2d4731\x2d00e487e87ff3.mount: Deactivated successfully. Jul 6 23:57:07.300260 systemd[1]: run-netns-cni\x2d6e2919a3\x2d924c\x2d359c\x2d01c9\x2d3fc5d5a37341.mount: Deactivated successfully. 
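The teardown entries show why releases come in two phases: the plugin first releases by handle ID, shrugs off the "Asked to release address but it doesn't exist" warning (these sandboxes are Attempt:1 re-creations, so their old handles are already gone), then repeats the release keyed by workload ID. CNI DEL has to be idempotent. A sketch of that shape, with a stand-in release function rather than Calico's real client:

    // Idempotent two-phase IP release: handle ID first, then the legacy
    // workload ID, treating "not found" as success in both phases.
    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("address not found")

    func releaseIP(release func(id string) error, handleID, workloadID string) error {
        for _, id := range []string{handleID, workloadID} {
            switch err := release(id); {
            case errors.Is(err, errNotFound):
                // "Asked to release address but it doesn't exist. Ignoring"
                fmt.Printf("WARNING: %s not found, ignoring\n", id)
            case err != nil:
                return err // a real failure, not mere absence
            }
        }
        return nil
    }

    func main() {
        gone := func(string) error { return errNotFound } // nothing allocated
        // Both phases hit "not found" yet DEL still succeeds: idempotent.
        fmt.Println(releaseIP(gone, "k8s-pod-network.765ffe09...", "wep-id"))
    }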
Jul 6 23:57:07.376153 kubelet[3188]: I0706 23:57:07.372592 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8bb5r" podStartSLOduration=41.372567036 podStartE2EDuration="41.372567036s" podCreationTimestamp="2025-07-06 23:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:07.370648514 +0000 UTC m=+47.440243975" watchObservedRunningTime="2025-07-06 23:57:07.372567036 +0000 UTC m=+47.442162497" Jul 6 23:57:07.542890 systemd-networkd[1350]: calic7885d3b3a9: Link UP Jul 6 23:57:07.543408 systemd-networkd[1350]: calic7885d3b3a9: Gained carrier Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.388 [INFO][5186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0 calico-apiserver-6cbd67b8cf- calico-apiserver deec6d87-2985-4168-ab6f-d727fa58f2d2 967 0 2025-07-06 23:56:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cbd67b8cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-6a836f1a00 calico-apiserver-6cbd67b8cf-wpmx9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7885d3b3a9 [] [] }} ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-wpmx9" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.389 [INFO][5186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-wpmx9" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.459 [INFO][5202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" HandleID="k8s-pod-network.ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.459 [INFO][5202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" HandleID="k8s-pod-network.ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-6a836f1a00", "pod":"calico-apiserver-6cbd67b8cf-wpmx9", "timestamp":"2025-07-06 23:57:07.459101709 +0000 UTC"}, Hostname:"ci-4081.3.4-a-6a836f1a00", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.459 [INFO][5202] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.459 [INFO][5202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.459 [INFO][5202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-6a836f1a00' Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.477 [INFO][5202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.487 [INFO][5202] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.493 [INFO][5202] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.495 [INFO][5202] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.500 [INFO][5202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.500 [INFO][5202] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.503 [INFO][5202] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3 Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.519 [INFO][5202] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.532 [INFO][5202] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.5/26] block=192.168.92.0/26 handle="k8s-pod-network.ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.532 [INFO][5202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.5/26] handle="k8s-pod-network.ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.535 [INFO][5202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
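Backing up to the kubelet line a few entries above: podStartSLOduration=41.372567036 for coredns-7c65d6cfc9-8bb5r is watchObservedRunningTime (23:57:07.372567036) minus podCreationTimestamp (23:56:26), with the image-pull window deducted; the zero-valued firstStartedPulling/lastFinishedPulling mean no pull happened, so nothing is subtracted. A rough reconstruction, where the deduction rule is inferred from the logged fields rather than taken from kubelet source:

    // Reconstruct the pod start SLO duration from the log's timestamps.
    package main

    import (
        "fmt"
        "time"
    )

    func podStartSLO(created, running, pullStart, pullEnd time.Time) time.Duration {
        d := running.Sub(created)
        if !pullStart.IsZero() && !pullEnd.IsZero() {
            d -= pullEnd.Sub(pullStart) // exclude image pull time
        }
        return d
    }

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-07-06T23:56:26Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-07-06T23:57:07.372567036Z")
        // firstStartedPulling/lastFinishedPulling are the zero time in the log.
        fmt.Println(podStartSLO(created, running, time.Time{}, time.Time{}))
        // prints 41.372567036s, matching podStartSLOduration exactly
    }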
Jul 6 23:57:07.579266 containerd[1734]: 2025-07-06 23:57:07.535 [INFO][5202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.5/26] IPv6=[] ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" HandleID="k8s-pod-network.ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.580859 containerd[1734]: 2025-07-06 23:57:07.537 [INFO][5186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-wpmx9" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0", GenerateName:"calico-apiserver-6cbd67b8cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"deec6d87-2985-4168-ab6f-d727fa58f2d2", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbd67b8cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"", Pod:"calico-apiserver-6cbd67b8cf-wpmx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7885d3b3a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:07.580859 containerd[1734]: 2025-07-06 23:57:07.537 [INFO][5186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.5/32] ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-wpmx9" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.580859 containerd[1734]: 2025-07-06 23:57:07.537 [INFO][5186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7885d3b3a9 ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-wpmx9" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.580859 containerd[1734]: 2025-07-06 23:57:07.541 [INFO][5186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-wpmx9" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.580859 containerd[1734]: 2025-07-06 23:57:07.541 [INFO][5186] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-wpmx9" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0", GenerateName:"calico-apiserver-6cbd67b8cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"deec6d87-2985-4168-ab6f-d727fa58f2d2", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbd67b8cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3", Pod:"calico-apiserver-6cbd67b8cf-wpmx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7885d3b3a9", MAC:"7e:4e:e9:ac:c7:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:07.580859 containerd[1734]: 2025-07-06 23:57:07.574 [INFO][5186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbd67b8cf-wpmx9" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:07.623071 containerd[1734]: time="2025-07-06T23:57:07.622955851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:07.623398 containerd[1734]: time="2025-07-06T23:57:07.623268555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:07.623398 containerd[1734]: time="2025-07-06T23:57:07.623329356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:07.623889 containerd[1734]: time="2025-07-06T23:57:07.623700160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:07.676726 systemd[1]: Started cri-containerd-ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3.scope - libcontainer container ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3. 
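Every sandbox start in this log follows the same pattern: the quartet of "loading plugin io.containerd.*" lines is a fresh io.containerd.runc.v2 shim registering its ttrpc services, after which systemd starts the matching cri-containerd-<id>.scope for the container's cgroup. To inspect one of these sandboxes from Go, the containerd client can load it by ID; the socket path and the "k8s.io" CRI namespace below are the usual defaults, not something this log states:

    // Load the sandbox container just started above and ask its shim
    // for task status. Socket path and CRI namespace are assumed defaults.
    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        c, err := client.LoadContainer(ctx, "ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3")
        if err != nil {
            log.Fatal(err)
        }
        task, err := c.Task(ctx, nil) // nil: don't attach IO
        if err != nil {
            log.Fatal(err)
        }
        status, err := task.Status(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(status.Status) // e.g. "running"
    }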
Jul 6 23:57:07.683054 systemd-networkd[1350]: cali1ad5bd7e259: Link UP Jul 6 23:57:07.683350 systemd-networkd[1350]: cali1ad5bd7e259: Gained carrier Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.483 [INFO][5208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0 calico-kube-controllers-5647f696c4- calico-system 5065e550-3d85-474c-9080-cbf916c3e61c 971 0 2025-07-06 23:56:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5647f696c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-a-6a836f1a00 calico-kube-controllers-5647f696c4-hvltb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1ad5bd7e259 [] [] }} ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Namespace="calico-system" Pod="calico-kube-controllers-5647f696c4-hvltb" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.484 [INFO][5208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Namespace="calico-system" Pod="calico-kube-controllers-5647f696c4-hvltb" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.524 [INFO][5221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" HandleID="k8s-pod-network.f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.524 [INFO][5221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" HandleID="k8s-pod-network.f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-6a836f1a00", "pod":"calico-kube-controllers-5647f696c4-hvltb", "timestamp":"2025-07-06 23:57:07.524746447 +0000 UTC"}, Hostname:"ci-4081.3.4-a-6a836f1a00", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.525 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.533 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.534 [INFO][5221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-6a836f1a00' Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.598 [INFO][5221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.606 [INFO][5221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.616 [INFO][5221] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.620 [INFO][5221] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.625 [INFO][5221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.625 [INFO][5221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.635 [INFO][5221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543 Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.645 [INFO][5221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.660 [INFO][5221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.6/26] block=192.168.92.0/26 handle="k8s-pod-network.f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.661 [INFO][5221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.6/26] handle="k8s-pod-network.f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.661 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
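For scale: the affine block 192.168.92.0/26 that every one of these requests lands in holds 2^(32-26) = 64 addresses, and this burst consumed .3 through .6 in about a second, so the node is nowhere near needing Calico to claim a second block. The arithmetic, for the record:

    // Capacity of the node's affine IPAM block.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.92.0/26")
        size := 1 << (32 - block.Bits())
        fmt.Printf("%s holds %d addresses\n", block, size) // 64
    }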
Jul 6 23:57:07.718029 containerd[1734]: 2025-07-06 23:57:07.661 [INFO][5221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.6/26] IPv6=[] ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" HandleID="k8s-pod-network.f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:07.719159 containerd[1734]: 2025-07-06 23:57:07.663 [INFO][5208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Namespace="calico-system" Pod="calico-kube-controllers-5647f696c4-hvltb" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0", GenerateName:"calico-kube-controllers-5647f696c4-", Namespace:"calico-system", SelfLink:"", UID:"5065e550-3d85-474c-9080-cbf916c3e61c", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5647f696c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"", Pod:"calico-kube-controllers-5647f696c4-hvltb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ad5bd7e259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:07.719159 containerd[1734]: 2025-07-06 23:57:07.664 [INFO][5208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.6/32] ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Namespace="calico-system" Pod="calico-kube-controllers-5647f696c4-hvltb" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:07.719159 containerd[1734]: 2025-07-06 23:57:07.664 [INFO][5208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ad5bd7e259 ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Namespace="calico-system" Pod="calico-kube-controllers-5647f696c4-hvltb" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:07.719159 containerd[1734]: 2025-07-06 23:57:07.684 [INFO][5208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Namespace="calico-system" Pod="calico-kube-controllers-5647f696c4-hvltb" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 
23:57:07.719159 containerd[1734]: 2025-07-06 23:57:07.686 [INFO][5208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Namespace="calico-system" Pod="calico-kube-controllers-5647f696c4-hvltb" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0", GenerateName:"calico-kube-controllers-5647f696c4-", Namespace:"calico-system", SelfLink:"", UID:"5065e550-3d85-474c-9080-cbf916c3e61c", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5647f696c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543", Pod:"calico-kube-controllers-5647f696c4-hvltb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ad5bd7e259", MAC:"a6:27:f5:b4:6c:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:07.719159 containerd[1734]: 2025-07-06 23:57:07.711 [INFO][5208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543" Namespace="calico-system" Pod="calico-kube-controllers-5647f696c4-hvltb" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:07.766143 containerd[1734]: time="2025-07-06T23:57:07.765975360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:07.767524 containerd[1734]: time="2025-07-06T23:57:07.766080061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:07.767524 containerd[1734]: time="2025-07-06T23:57:07.766108661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:07.767524 containerd[1734]: time="2025-07-06T23:57:07.766228862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:07.804828 containerd[1734]: time="2025-07-06T23:57:07.804313391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbd67b8cf-wpmx9,Uid:deec6d87-2985-4168-ab6f-d727fa58f2d2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3\"" Jul 6 23:57:07.821885 systemd[1]: Started cri-containerd-f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543.scope - libcontainer container f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543. Jul 6 23:57:07.886911 containerd[1734]: time="2025-07-06T23:57:07.886864119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5647f696c4-hvltb,Uid:5065e550-3d85-474c-9080-cbf916c3e61c,Namespace:calico-system,Attempt:1,} returns sandbox id \"f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543\"" Jul 6 23:57:08.044534 containerd[1734]: time="2025-07-06T23:57:08.043944885Z" level=info msg="StopPodSandbox for \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\"" Jul 6 23:57:08.059654 containerd[1734]: time="2025-07-06T23:57:08.059600361Z" level=info msg="StopPodSandbox for \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\"" Jul 6 23:57:08.157803 systemd-networkd[1350]: cali5631955af2e: Gained IPv6LL Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.202 [INFO][5348] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.203 [INFO][5348] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" iface="eth0" netns="/var/run/netns/cni-e29b22e2-497f-b364-c1d0-becadbe9f576" Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.206 [INFO][5348] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" iface="eth0" netns="/var/run/netns/cni-e29b22e2-497f-b364-c1d0-becadbe9f576" Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.206 [INFO][5348] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" iface="eth0" netns="/var/run/netns/cni-e29b22e2-497f-b364-c1d0-becadbe9f576" Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.206 [INFO][5348] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.207 [INFO][5348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.281 [INFO][5362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" HandleID="k8s-pod-network.1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.282 [INFO][5362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.282 [INFO][5362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.298 [WARNING][5362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" HandleID="k8s-pod-network.1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.298 [INFO][5362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" HandleID="k8s-pod-network.1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.299 [INFO][5362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:08.305029 containerd[1734]: 2025-07-06 23:57:08.303 [INFO][5348] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:08.308156 containerd[1734]: time="2025-07-06T23:57:08.307565249Z" level=info msg="TearDown network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\" successfully" Jul 6 23:57:08.308156 containerd[1734]: time="2025-07-06T23:57:08.307607450Z" level=info msg="StopPodSandbox for \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\" returns successfully" Jul 6 23:57:08.309498 containerd[1734]: time="2025-07-06T23:57:08.308675062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kl5jd,Uid:5edfa730-d45f-46ca-a7ed-ef497ff8b782,Namespace:calico-system,Attempt:1,}" Jul 6 23:57:08.313507 systemd[1]: run-netns-cni\x2de29b22e2\x2d497f\x2db364\x2dc1d0\x2dbecadbe9f576.mount: Deactivated successfully. Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.203 [INFO][5347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.203 [INFO][5347] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" iface="eth0" netns="/var/run/netns/cni-0b159e8b-b4fc-1f0b-fda3-1f0221112b5c" Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.205 [INFO][5347] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" iface="eth0" netns="/var/run/netns/cni-0b159e8b-b4fc-1f0b-fda3-1f0221112b5c" Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.205 [INFO][5347] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" iface="eth0" netns="/var/run/netns/cni-0b159e8b-b4fc-1f0b-fda3-1f0221112b5c" Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.205 [INFO][5347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.206 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.283 [INFO][5360] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" HandleID="k8s-pod-network.d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.284 [INFO][5360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.299 [INFO][5360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.322 [WARNING][5360] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" HandleID="k8s-pod-network.d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.323 [INFO][5360] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" HandleID="k8s-pod-network.d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.328 [INFO][5360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:08.340270 containerd[1734]: 2025-07-06 23:57:08.334 [INFO][5347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:08.344264 containerd[1734]: time="2025-07-06T23:57:08.341214428Z" level=info msg="TearDown network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\" successfully" Jul 6 23:57:08.344264 containerd[1734]: time="2025-07-06T23:57:08.343552354Z" level=info msg="StopPodSandbox for \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\" returns successfully" Jul 6 23:57:08.346865 containerd[1734]: time="2025-07-06T23:57:08.345106071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qrbvq,Uid:68b343a0-0dad-4108-a9c1-a80798ac4e3d,Namespace:kube-system,Attempt:1,}" Jul 6 23:57:08.346767 systemd[1]: run-netns-cni\x2d0b159e8b\x2db4fc\x2d1f0b\x2dfda3\x2d1f0221112b5c.mount: Deactivated successfully. 
Jul 6 23:57:08.542101 systemd-networkd[1350]: cali0b5e1c78d4d: Gained IPv6LL Jul 6 23:57:08.736270 systemd-networkd[1350]: cali0a8c920207e: Gained IPv6LL Jul 6 23:57:09.056735 systemd-networkd[1350]: cali1a77e00d823: Link UP Jul 6 23:57:09.058885 systemd-networkd[1350]: cali1a77e00d823: Gained carrier Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:08.894 [INFO][5384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0 goldmane-58fd7646b9- calico-system 5edfa730-d45f-46ca-a7ed-ef497ff8b782 988 0 2025-07-06 23:56:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-a-6a836f1a00 goldmane-58fd7646b9-kl5jd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1a77e00d823 [] [] }} ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl5jd" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:08.894 [INFO][5384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl5jd" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:08.979 [INFO][5407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" HandleID="k8s-pod-network.4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:08.979 [INFO][5407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" HandleID="k8s-pod-network.4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037df30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-6a836f1a00", "pod":"goldmane-58fd7646b9-kl5jd", "timestamp":"2025-07-06 23:57:08.979495405 +0000 UTC"}, Hostname:"ci-4081.3.4-a-6a836f1a00", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:08.979 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:08.979 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:08.980 [INFO][5407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-6a836f1a00' Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:08.991 [INFO][5407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:08.997 [INFO][5407] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.003 [INFO][5407] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.007 [INFO][5407] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.013 [INFO][5407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.014 [INFO][5407] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.018 [INFO][5407] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.029 [INFO][5407] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.043 [INFO][5407] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.7/26] block=192.168.92.0/26 handle="k8s-pod-network.4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.043 [INFO][5407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.7/26] handle="k8s-pod-network.4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.044 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:57:09.088079 containerd[1734]: 2025-07-06 23:57:09.044 [INFO][5407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.7/26] IPv6=[] ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" HandleID="k8s-pod-network.4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:09.090655 containerd[1734]: 2025-07-06 23:57:09.047 [INFO][5384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl5jd" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5edfa730-d45f-46ca-a7ed-ef497ff8b782", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"", Pod:"goldmane-58fd7646b9-kl5jd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1a77e00d823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:09.090655 containerd[1734]: 2025-07-06 23:57:09.048 [INFO][5384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.7/32] ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl5jd" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:09.090655 containerd[1734]: 2025-07-06 23:57:09.048 [INFO][5384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a77e00d823 ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl5jd" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:09.090655 containerd[1734]: 2025-07-06 23:57:09.057 [INFO][5384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl5jd" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:09.090655 containerd[1734]: 2025-07-06 23:57:09.057 [INFO][5384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Namespace="calico-system" 
Pod="goldmane-58fd7646b9-kl5jd" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5edfa730-d45f-46ca-a7ed-ef497ff8b782", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca", Pod:"goldmane-58fd7646b9-kl5jd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1a77e00d823", MAC:"7e:f9:58:1e:7f:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:09.090655 containerd[1734]: 2025-07-06 23:57:09.085 [INFO][5384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl5jd" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:09.188623 containerd[1734]: time="2025-07-06T23:57:09.183732701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:09.188623 containerd[1734]: time="2025-07-06T23:57:09.184456509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:09.188623 containerd[1734]: time="2025-07-06T23:57:09.184840114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:09.188623 containerd[1734]: time="2025-07-06T23:57:09.185739624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:09.186541 systemd-networkd[1350]: cali8e599d484e6: Link UP Jul 6 23:57:09.186872 systemd-networkd[1350]: cali8e599d484e6: Gained carrier Jul 6 23:57:09.230908 systemd[1]: Started cri-containerd-4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca.scope - libcontainer container 4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca. 
Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:08.948 [INFO][5393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0 coredns-7c65d6cfc9- kube-system 68b343a0-0dad-4108-a9c1-a80798ac4e3d 989 0 2025-07-06 23:56:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-6a836f1a00 coredns-7c65d6cfc9-qrbvq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8e599d484e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qrbvq" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:08.948 [INFO][5393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qrbvq" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.032 [INFO][5413] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" HandleID="k8s-pod-network.7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.033 [INFO][5413] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" HandleID="k8s-pod-network.7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d8710), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-6a836f1a00", "pod":"coredns-7c65d6cfc9-qrbvq", "timestamp":"2025-07-06 23:57:09.032618802 +0000 UTC"}, Hostname:"ci-4081.3.4-a-6a836f1a00", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.033 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.044 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.044 [INFO][5413] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-6a836f1a00' Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.093 [INFO][5413] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.101 [INFO][5413] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.110 [INFO][5413] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.112 [INFO][5413] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.116 [INFO][5413] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.116 [INFO][5413] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.118 [INFO][5413] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.132 [INFO][5413] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.161 [INFO][5413] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.8/26] block=192.168.92.0/26 handle="k8s-pod-network.7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.161 [INFO][5413] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.8/26] handle="k8s-pod-network.7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" host="ci-4081.3.4-a-6a836f1a00" Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.161 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:57:09.246247 containerd[1734]: 2025-07-06 23:57:09.161 [INFO][5413] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.8/26] IPv6=[] ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" HandleID="k8s-pod-network.7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:09.247238 containerd[1734]: 2025-07-06 23:57:09.169 [INFO][5393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qrbvq" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68b343a0-0dad-4108-a9c1-a80798ac4e3d", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"", Pod:"coredns-7c65d6cfc9-qrbvq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e599d484e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:09.247238 containerd[1734]: 2025-07-06 23:57:09.169 [INFO][5393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.8/32] ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qrbvq" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:09.247238 containerd[1734]: 2025-07-06 23:57:09.170 [INFO][5393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e599d484e6 ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qrbvq" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:09.247238 containerd[1734]: 2025-07-06 23:57:09.186 [INFO][5393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qrbvq" 
WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:09.247238 containerd[1734]: 2025-07-06 23:57:09.188 [INFO][5393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qrbvq" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68b343a0-0dad-4108-a9c1-a80798ac4e3d", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca", Pod:"coredns-7c65d6cfc9-qrbvq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e599d484e6", MAC:"fa:ee:ce:49:94:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:09.247238 containerd[1734]: 2025-07-06 23:57:09.242 [INFO][5393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qrbvq" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:09.324931 containerd[1734]: time="2025-07-06T23:57:09.324815788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:09.324931 containerd[1734]: time="2025-07-06T23:57:09.324891689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:09.326275 containerd[1734]: time="2025-07-06T23:57:09.324921489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:09.326275 containerd[1734]: time="2025-07-06T23:57:09.325062390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:09.395749 systemd[1]: run-containerd-runc-k8s.io-7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca-runc.gSNIWC.mount: Deactivated successfully. Jul 6 23:57:09.406716 systemd[1]: Started cri-containerd-7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca.scope - libcontainer container 7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca. Jul 6 23:57:09.546534 containerd[1734]: time="2025-07-06T23:57:09.546068776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qrbvq,Uid:68b343a0-0dad-4108-a9c1-a80798ac4e3d,Namespace:kube-system,Attempt:1,} returns sandbox id \"7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca\"" Jul 6 23:57:09.555591 containerd[1734]: time="2025-07-06T23:57:09.555062977Z" level=info msg="CreateContainer within sandbox \"7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:57:09.566612 systemd-networkd[1350]: calic7885d3b3a9: Gained IPv6LL Jul 6 23:57:09.625160 containerd[1734]: time="2025-07-06T23:57:09.624765960Z" level=info msg="CreateContainer within sandbox \"7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"647362415d9fdd5b4577f78341bebf8a3691d53ba7c5f422e60c8a26e0735276\"" Jul 6 23:57:09.628062 containerd[1734]: time="2025-07-06T23:57:09.626609881Z" level=info msg="StartContainer for \"647362415d9fdd5b4577f78341bebf8a3691d53ba7c5f422e60c8a26e0735276\"" Jul 6 23:57:09.694028 systemd-networkd[1350]: cali1ad5bd7e259: Gained IPv6LL Jul 6 23:57:09.708544 containerd[1734]: time="2025-07-06T23:57:09.708283000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kl5jd,Uid:5edfa730-d45f-46ca-a7ed-ef497ff8b782,Namespace:calico-system,Attempt:1,} returns sandbox id \"4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca\"" Jul 6 23:57:09.736911 systemd[1]: Started cri-containerd-647362415d9fdd5b4577f78341bebf8a3691d53ba7c5f422e60c8a26e0735276.scope - libcontainer container 647362415d9fdd5b4577f78341bebf8a3691d53ba7c5f422e60c8a26e0735276. 
Jul 6 23:57:09.811888 containerd[1734]: time="2025-07-06T23:57:09.811737363Z" level=info msg="StartContainer for \"647362415d9fdd5b4577f78341bebf8a3691d53ba7c5f422e60c8a26e0735276\" returns successfully" Jul 6 23:57:09.956349 kubelet[3188]: I0706 23:57:09.955395 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:10.269756 systemd-networkd[1350]: cali1a77e00d823: Gained IPv6LL Jul 6 23:57:10.407426 kubelet[3188]: I0706 23:57:10.407248 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qrbvq" podStartSLOduration=44.407224759 podStartE2EDuration="44.407224759s" podCreationTimestamp="2025-07-06 23:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:10.406352449 +0000 UTC m=+50.475947910" watchObservedRunningTime="2025-07-06 23:57:10.407224759 +0000 UTC m=+50.476820220" Jul 6 23:57:10.590270 systemd-networkd[1350]: cali8e599d484e6: Gained IPv6LL Jul 6 23:57:10.842774 containerd[1734]: time="2025-07-06T23:57:10.842381152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:10.844860 containerd[1734]: time="2025-07-06T23:57:10.844782679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 6 23:57:10.850873 containerd[1734]: time="2025-07-06T23:57:10.850794046Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:10.854932 containerd[1734]: time="2025-07-06T23:57:10.854845092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:10.855901 containerd[1734]: time="2025-07-06T23:57:10.855745502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.103158138s" Jul 6 23:57:10.855901 containerd[1734]: time="2025-07-06T23:57:10.855788903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:57:10.859381 containerd[1734]: time="2025-07-06T23:57:10.859091940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:57:10.860669 containerd[1734]: time="2025-07-06T23:57:10.860622657Z" level=info msg="CreateContainer within sandbox \"4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:57:10.908807 containerd[1734]: time="2025-07-06T23:57:10.908752792Z" level=info msg="CreateContainer within sandbox \"4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0d4fd352cb5be8290f046fbc8b19c3c861b4a6a1d3ec5607cdcffb6fabd6c75b\"" Jul 6 23:57:10.911222 containerd[1734]: time="2025-07-06T23:57:10.909891104Z" 
level=info msg="StartContainer for \"0d4fd352cb5be8290f046fbc8b19c3c861b4a6a1d3ec5607cdcffb6fabd6c75b\"" Jul 6 23:57:10.962646 systemd[1]: Started cri-containerd-0d4fd352cb5be8290f046fbc8b19c3c861b4a6a1d3ec5607cdcffb6fabd6c75b.scope - libcontainer container 0d4fd352cb5be8290f046fbc8b19c3c861b4a6a1d3ec5607cdcffb6fabd6c75b. Jul 6 23:57:11.012559 containerd[1734]: time="2025-07-06T23:57:11.012509187Z" level=info msg="StartContainer for \"0d4fd352cb5be8290f046fbc8b19c3c861b4a6a1d3ec5607cdcffb6fabd6c75b\" returns successfully" Jul 6 23:57:11.406280 kubelet[3188]: I0706 23:57:11.406201 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-chfgm" podStartSLOduration=30.300367276 podStartE2EDuration="34.406175843s" podCreationTimestamp="2025-07-06 23:56:37 +0000 UTC" firstStartedPulling="2025-07-06 23:57:06.751013647 +0000 UTC m=+46.820609108" lastFinishedPulling="2025-07-06 23:57:10.856822214 +0000 UTC m=+50.926417675" observedRunningTime="2025-07-06 23:57:11.404334624 +0000 UTC m=+51.473930185" watchObservedRunningTime="2025-07-06 23:57:11.406175843 +0000 UTC m=+51.475771404" Jul 6 23:57:12.377006 containerd[1734]: time="2025-07-06T23:57:12.376942590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:12.379655 containerd[1734]: time="2025-07-06T23:57:12.379592918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:57:12.386135 containerd[1734]: time="2025-07-06T23:57:12.386086487Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:12.387501 kubelet[3188]: I0706 23:57:12.386763 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:12.393368 containerd[1734]: time="2025-07-06T23:57:12.393316263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:12.394303 containerd[1734]: time="2025-07-06T23:57:12.394248473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.53487623s" Jul 6 23:57:12.394402 containerd[1734]: time="2025-07-06T23:57:12.394307374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:57:12.398939 containerd[1734]: time="2025-07-06T23:57:12.398576819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:57:12.400732 containerd[1734]: time="2025-07-06T23:57:12.400695241Z" level=info msg="CreateContainer within sandbox \"6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:57:12.451661 containerd[1734]: time="2025-07-06T23:57:12.451608579Z" level=info msg="CreateContainer within sandbox \"6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3ac695bcd7785cfc28b84341a39f5bb5e28bd91b52b6112395409a587852965f\"" Jul 6 23:57:12.452683 containerd[1734]: time="2025-07-06T23:57:12.452505888Z" level=info msg="StartContainer for \"3ac695bcd7785cfc28b84341a39f5bb5e28bd91b52b6112395409a587852965f\"" Jul 6 23:57:12.517669 systemd[1]: Started cri-containerd-3ac695bcd7785cfc28b84341a39f5bb5e28bd91b52b6112395409a587852965f.scope - libcontainer container 3ac695bcd7785cfc28b84341a39f5bb5e28bd91b52b6112395409a587852965f. Jul 6 23:57:12.611525 containerd[1734]: time="2025-07-06T23:57:12.610839759Z" level=info msg="StartContainer for \"3ac695bcd7785cfc28b84341a39f5bb5e28bd91b52b6112395409a587852965f\" returns successfully" Jul 6 23:57:12.729552 containerd[1734]: time="2025-07-06T23:57:12.729385611Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:12.732376 containerd[1734]: time="2025-07-06T23:57:12.732319842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:57:12.736328 containerd[1734]: time="2025-07-06T23:57:12.736203583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 337.585764ms" Jul 6 23:57:12.736328 containerd[1734]: time="2025-07-06T23:57:12.736262783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:57:12.738013 containerd[1734]: time="2025-07-06T23:57:12.737983601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:57:12.740335 containerd[1734]: time="2025-07-06T23:57:12.740267726Z" level=info msg="CreateContainer within sandbox \"ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:57:12.790684 containerd[1734]: time="2025-07-06T23:57:12.790636657Z" level=info msg="CreateContainer within sandbox \"ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"16b32d084ad936865bcefa0d7b2974a5e60ded9f36328c9c71110392c6166d9c\"" Jul 6 23:57:12.792142 containerd[1734]: time="2025-07-06T23:57:12.791380965Z" level=info msg="StartContainer for \"16b32d084ad936865bcefa0d7b2974a5e60ded9f36328c9c71110392c6166d9c\"" Jul 6 23:57:12.848932 systemd[1]: Started cri-containerd-16b32d084ad936865bcefa0d7b2974a5e60ded9f36328c9c71110392c6166d9c.scope - libcontainer container 16b32d084ad936865bcefa0d7b2974a5e60ded9f36328c9c71110392c6166d9c. 
Jul 6 23:57:12.944274 containerd[1734]: time="2025-07-06T23:57:12.944224879Z" level=info msg="StartContainer for \"16b32d084ad936865bcefa0d7b2974a5e60ded9f36328c9c71110392c6166d9c\" returns successfully" Jul 6 23:57:13.438495 kubelet[3188]: I0706 23:57:13.436715 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cbd67b8cf-wpmx9" podStartSLOduration=31.507393315 podStartE2EDuration="36.436692177s" podCreationTimestamp="2025-07-06 23:56:37 +0000 UTC" firstStartedPulling="2025-07-06 23:57:07.807887031 +0000 UTC m=+47.877482592" lastFinishedPulling="2025-07-06 23:57:12.737185993 +0000 UTC m=+52.806781454" observedRunningTime="2025-07-06 23:57:13.435364963 +0000 UTC m=+53.504960424" watchObservedRunningTime="2025-07-06 23:57:13.436692177 +0000 UTC m=+53.506287738" Jul 6 23:57:15.410119 kubelet[3188]: I0706 23:57:15.410078 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:16.459632 containerd[1734]: time="2025-07-06T23:57:16.459579386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:16.464588 containerd[1734]: time="2025-07-06T23:57:16.464529439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:57:16.473904 containerd[1734]: time="2025-07-06T23:57:16.473844937Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:16.480098 containerd[1734]: time="2025-07-06T23:57:16.480031702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.7420086s" Jul 6 23:57:16.480098 containerd[1734]: time="2025-07-06T23:57:16.480097903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 23:57:16.480772 containerd[1734]: time="2025-07-06T23:57:16.480728610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:16.483348 containerd[1734]: time="2025-07-06T23:57:16.483314437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:57:16.502834 containerd[1734]: time="2025-07-06T23:57:16.502788443Z" level=info msg="CreateContainer within sandbox \"f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:57:16.545401 containerd[1734]: time="2025-07-06T23:57:16.545335892Z" level=info msg="CreateContainer within sandbox \"f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8461ef3509943c754fd49807069a565616aa2a441a4fad561eea47e2476d9460\"" Jul 6 23:57:16.546787 containerd[1734]: time="2025-07-06T23:57:16.546752307Z" level=info msg="StartContainer 
for \"8461ef3509943c754fd49807069a565616aa2a441a4fad561eea47e2476d9460\"" Jul 6 23:57:16.601568 systemd[1]: Started cri-containerd-8461ef3509943c754fd49807069a565616aa2a441a4fad561eea47e2476d9460.scope - libcontainer container 8461ef3509943c754fd49807069a565616aa2a441a4fad561eea47e2476d9460. Jul 6 23:57:16.686148 containerd[1734]: time="2025-07-06T23:57:16.686096478Z" level=info msg="StartContainer for \"8461ef3509943c754fd49807069a565616aa2a441a4fad561eea47e2476d9460\" returns successfully" Jul 6 23:57:17.446123 kubelet[3188]: I0706 23:57:17.445314 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5647f696c4-hvltb" podStartSLOduration=28.851514703 podStartE2EDuration="37.445291992s" podCreationTimestamp="2025-07-06 23:56:40 +0000 UTC" firstStartedPulling="2025-07-06 23:57:07.888931942 +0000 UTC m=+47.958527503" lastFinishedPulling="2025-07-06 23:57:16.482709231 +0000 UTC m=+56.552304792" observedRunningTime="2025-07-06 23:57:17.444260381 +0000 UTC m=+57.513855942" watchObservedRunningTime="2025-07-06 23:57:17.445291992 +0000 UTC m=+57.514887553" Jul 6 23:57:19.744569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2991028036.mount: Deactivated successfully. Jul 6 23:57:20.216802 containerd[1734]: time="2025-07-06T23:57:20.052686048Z" level=info msg="StopPodSandbox for \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\"" Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.142 [WARNING][5839] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0", GenerateName:"calico-apiserver-6cbd67b8cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"deec6d87-2985-4168-ab6f-d727fa58f2d2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbd67b8cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3", Pod:"calico-apiserver-6cbd67b8cf-wpmx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7885d3b3a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.218 [INFO][5839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.218 
[INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" iface="eth0" netns="" Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.218 [INFO][5839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.218 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.253 [INFO][5846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" HandleID="k8s-pod-network.765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.254 [INFO][5846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.254 [INFO][5846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.267 [WARNING][5846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" HandleID="k8s-pod-network.765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.267 [INFO][5846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" HandleID="k8s-pod-network.765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.271 [INFO][5846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:20.285138 containerd[1734]: 2025-07-06 23:57:20.278 [INFO][5839] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:20.287340 containerd[1734]: time="2025-07-06T23:57:20.285147147Z" level=info msg="TearDown network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\" successfully" Jul 6 23:57:20.287340 containerd[1734]: time="2025-07-06T23:57:20.285179548Z" level=info msg="StopPodSandbox for \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\" returns successfully" Jul 6 23:57:20.290427 containerd[1734]: time="2025-07-06T23:57:20.288755385Z" level=info msg="RemovePodSandbox for \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\"" Jul 6 23:57:20.290427 containerd[1734]: time="2025-07-06T23:57:20.288815585Z" level=info msg="Forcibly stopping sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\"" Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.449 [WARNING][5861] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0", GenerateName:"calico-apiserver-6cbd67b8cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"deec6d87-2985-4168-ab6f-d727fa58f2d2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbd67b8cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"ae224d053b877ef5a8abb130833882e9a7637f31ce201e40305ba2176bb7f4e3", Pod:"calico-apiserver-6cbd67b8cf-wpmx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7885d3b3a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.449 [INFO][5861] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.449 [INFO][5861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" iface="eth0" netns="" Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.449 [INFO][5861] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.449 [INFO][5861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.502 [INFO][5872] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" HandleID="k8s-pod-network.765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.502 [INFO][5872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.503 [INFO][5872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.512 [WARNING][5872] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" HandleID="k8s-pod-network.765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.512 [INFO][5872] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" HandleID="k8s-pod-network.765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--wpmx9-eth0" Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.515 [INFO][5872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:20.518572 containerd[1734]: 2025-07-06 23:57:20.516 [INFO][5861] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5" Jul 6 23:57:20.520203 containerd[1734]: time="2025-07-06T23:57:20.519537767Z" level=info msg="TearDown network for sandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\" successfully" Jul 6 23:57:23.172543 containerd[1734]: time="2025-07-06T23:57:23.171742642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:23.172543 containerd[1734]: time="2025-07-06T23:57:23.171848443Z" level=info msg="RemovePodSandbox \"765ffe098ac0b4c93691713ade29d7c8eab95fb70483a3c81401d96ea07eceb5\" returns successfully" Jul 6 23:57:23.175100 containerd[1734]: time="2025-07-06T23:57:23.174558071Z" level=info msg="StopPodSandbox for \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\"" Jul 6 23:57:23.238082 containerd[1734]: time="2025-07-06T23:57:23.238003626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:23.256821 containerd[1734]: time="2025-07-06T23:57:23.256530117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:57:23.261264 containerd[1734]: time="2025-07-06T23:57:23.261217666Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:23.267410 containerd[1734]: time="2025-07-06T23:57:23.267351029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:23.268512 containerd[1734]: time="2025-07-06T23:57:23.268452041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.785098703s" Jul 6 23:57:23.268735 containerd[1734]: time="2025-07-06T23:57:23.268518241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference 
\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:57:23.271612 containerd[1734]: time="2025-07-06T23:57:23.270821665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:57:23.272749 containerd[1734]: time="2025-07-06T23:57:23.272156579Z" level=info msg="CreateContainer within sandbox \"4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:57:23.322784 containerd[1734]: time="2025-07-06T23:57:23.322732001Z" level=info msg="CreateContainer within sandbox \"4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1e2cfa6d8eb2f49efa98f878e42285df1020de33f34511338179683397067f89\"" Jul 6 23:57:23.325352 containerd[1734]: time="2025-07-06T23:57:23.325244927Z" level=info msg="StartContainer for \"1e2cfa6d8eb2f49efa98f878e42285df1020de33f34511338179683397067f89\"" Jul 6 23:57:23.385603 systemd[1]: Started cri-containerd-1e2cfa6d8eb2f49efa98f878e42285df1020de33f34511338179683397067f89.scope - libcontainer container 1e2cfa6d8eb2f49efa98f878e42285df1020de33f34511338179683397067f89. Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.285 [WARNING][5886] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.285 [INFO][5886] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.286 [INFO][5886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" iface="eth0" netns="" Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.286 [INFO][5886] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.286 [INFO][5886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.369 [INFO][5915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" HandleID="k8s-pod-network.5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.370 [INFO][5915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.370 [INFO][5915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.390 [WARNING][5915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" HandleID="k8s-pod-network.5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.390 [INFO][5915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" HandleID="k8s-pod-network.5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.397 [INFO][5915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:23.405106 containerd[1734]: 2025-07-06 23:57:23.401 [INFO][5886] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:23.406016 containerd[1734]: time="2025-07-06T23:57:23.405570156Z" level=info msg="TearDown network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\" successfully" Jul 6 23:57:23.406016 containerd[1734]: time="2025-07-06T23:57:23.405605756Z" level=info msg="StopPodSandbox for \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\" returns successfully" Jul 6 23:57:23.406738 containerd[1734]: time="2025-07-06T23:57:23.406702968Z" level=info msg="RemovePodSandbox for \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\"" Jul 6 23:57:23.407588 containerd[1734]: time="2025-07-06T23:57:23.407545576Z" level=info msg="Forcibly stopping sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\"" Jul 6 23:57:23.505396 containerd[1734]: time="2025-07-06T23:57:23.504814480Z" level=info msg="StartContainer for \"1e2cfa6d8eb2f49efa98f878e42285df1020de33f34511338179683397067f89\" returns successfully" Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.484 [WARNING][5957] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" WorkloadEndpoint="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.484 [INFO][5957] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.484 [INFO][5957] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" iface="eth0" netns="" Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.484 [INFO][5957] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.484 [INFO][5957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.525 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" HandleID="k8s-pod-network.5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.525 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.525 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.533 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" HandleID="k8s-pod-network.5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.533 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" HandleID="k8s-pod-network.5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-whisker--658968477d--kls7p-eth0" Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.535 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:23.540544 containerd[1734]: 2025-07-06 23:57:23.538 [INFO][5957] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e" Jul 6 23:57:23.540544 containerd[1734]: time="2025-07-06T23:57:23.540398148Z" level=info msg="TearDown network for sandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\" successfully" Jul 6 23:57:23.554283 containerd[1734]: time="2025-07-06T23:57:23.553789686Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:23.554283 containerd[1734]: time="2025-07-06T23:57:23.553881587Z" level=info msg="RemovePodSandbox \"5f964b49c6c8e2451e44b53e1f2b062f3deb581b8115e939e2ccf65e79e3079e\" returns successfully" Jul 6 23:57:23.555194 containerd[1734]: time="2025-07-06T23:57:23.555159600Z" level=info msg="StopPodSandbox for \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\"" Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.627 [WARNING][5989] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68b343a0-0dad-4108-a9c1-a80798ac4e3d", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca", Pod:"coredns-7c65d6cfc9-qrbvq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e599d484e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.628 [INFO][5989] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.628 [INFO][5989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" iface="eth0" netns="" Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.628 [INFO][5989] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.628 [INFO][5989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.679 [INFO][5996] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" HandleID="k8s-pod-network.d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.680 [INFO][5996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.680 [INFO][5996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.691 [WARNING][5996] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" HandleID="k8s-pod-network.d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.691 [INFO][5996] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" HandleID="k8s-pod-network.d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.693 [INFO][5996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:23.697829 containerd[1734]: 2025-07-06 23:57:23.694 [INFO][5989] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:23.697829 containerd[1734]: time="2025-07-06T23:57:23.697695571Z" level=info msg="TearDown network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\" successfully" Jul 6 23:57:23.697829 containerd[1734]: time="2025-07-06T23:57:23.697724271Z" level=info msg="StopPodSandbox for \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\" returns successfully" Jul 6 23:57:23.699595 containerd[1734]: time="2025-07-06T23:57:23.699073085Z" level=info msg="RemovePodSandbox for \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\"" Jul 6 23:57:23.699595 containerd[1734]: time="2025-07-06T23:57:23.699110986Z" level=info msg="Forcibly stopping sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\"" Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.755 [WARNING][6013] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68b343a0-0dad-4108-a9c1-a80798ac4e3d", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"7b1db669927df7bb4516e982c055cc80daee3ffe5d8a0a1c7640d1fff377f2ca", Pod:"coredns-7c65d6cfc9-qrbvq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e599d484e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.755 [INFO][6013] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.756 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" iface="eth0" netns="" Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.756 [INFO][6013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.756 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.792 [INFO][6021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" HandleID="k8s-pod-network.d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.794 [INFO][6021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.794 [INFO][6021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.802 [WARNING][6021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" HandleID="k8s-pod-network.d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.802 [INFO][6021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" HandleID="k8s-pod-network.d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--qrbvq-eth0" Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.803 [INFO][6021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:23.807537 containerd[1734]: 2025-07-06 23:57:23.805 [INFO][6013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25" Jul 6 23:57:23.808199 containerd[1734]: time="2025-07-06T23:57:23.807534605Z" level=info msg="TearDown network for sandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\" successfully" Jul 6 23:57:23.817514 containerd[1734]: time="2025-07-06T23:57:23.816595698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:23.817514 containerd[1734]: time="2025-07-06T23:57:23.816737700Z" level=info msg="RemovePodSandbox \"d2a5795d96dc740ebd7b5deafe0dea4df208302403fe7556d6c01366ca542d25\" returns successfully" Jul 6 23:57:23.818561 containerd[1734]: time="2025-07-06T23:57:23.818243215Z" level=info msg="StopPodSandbox for \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\"" Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.873 [WARNING][6037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"13c25702-c1ec-47b6-aa0c-d133d6bd76b6", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b", Pod:"coredns-7c65d6cfc9-8bb5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5631955af2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.873 [INFO][6037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.873 [INFO][6037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" iface="eth0" netns="" Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.873 [INFO][6037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.873 [INFO][6037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.907 [INFO][6044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" HandleID="k8s-pod-network.afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.907 [INFO][6044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.907 [INFO][6044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.914 [WARNING][6044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" HandleID="k8s-pod-network.afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.914 [INFO][6044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" HandleID="k8s-pod-network.afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.915 [INFO][6044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:23.921061 containerd[1734]: 2025-07-06 23:57:23.917 [INFO][6037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:23.922848 containerd[1734]: time="2025-07-06T23:57:23.921754684Z" level=info msg="TearDown network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\" successfully" Jul 6 23:57:23.922848 containerd[1734]: time="2025-07-06T23:57:23.921809884Z" level=info msg="StopPodSandbox for \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\" returns successfully" Jul 6 23:57:23.924152 containerd[1734]: time="2025-07-06T23:57:23.923718704Z" level=info msg="RemovePodSandbox for \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\"" Jul 6 23:57:23.924152 containerd[1734]: time="2025-07-06T23:57:23.923775405Z" level=info msg="Forcibly stopping sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\"" Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:23.983 [WARNING][6058] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"13c25702-c1ec-47b6-aa0c-d133d6bd76b6", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"fca28f543d17439de6bdcc4bec6cd1a171d5e04800c8990055d5a56c0a24358b", Pod:"coredns-7c65d6cfc9-8bb5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5631955af2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:23.983 [INFO][6058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:23.983 [INFO][6058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" iface="eth0" netns="" Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:23.983 [INFO][6058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:23.984 [INFO][6058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:24.014 [INFO][6065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" HandleID="k8s-pod-network.afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:24.014 [INFO][6065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:24.014 [INFO][6065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:24.022 [WARNING][6065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" HandleID="k8s-pod-network.afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:24.022 [INFO][6065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" HandleID="k8s-pod-network.afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Workload="ci--4081.3.4--a--6a836f1a00-k8s-coredns--7c65d6cfc9--8bb5r-eth0" Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:24.024 [INFO][6065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:24.028113 containerd[1734]: 2025-07-06 23:57:24.026 [INFO][6058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd" Jul 6 23:57:24.028876 containerd[1734]: time="2025-07-06T23:57:24.028177282Z" level=info msg="TearDown network for sandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\" successfully" Jul 6 23:57:24.037245 containerd[1734]: time="2025-07-06T23:57:24.037151375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:24.037398 containerd[1734]: time="2025-07-06T23:57:24.037306577Z" level=info msg="RemovePodSandbox \"afbbdcaf8923a8a0bfb42670adaebf283903a5c7d608ea6c851f94dd21ced1fd\" returns successfully" Jul 6 23:57:24.039369 containerd[1734]: time="2025-07-06T23:57:24.038860693Z" level=info msg="StopPodSandbox for \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\"" Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.133 [WARNING][6079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0", GenerateName:"calico-kube-controllers-5647f696c4-", Namespace:"calico-system", SelfLink:"", UID:"5065e550-3d85-474c-9080-cbf916c3e61c", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5647f696c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543", Pod:"calico-kube-controllers-5647f696c4-hvltb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ad5bd7e259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.134 [INFO][6079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.134 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" iface="eth0" netns="" Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.134 [INFO][6079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.134 [INFO][6079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.179 [INFO][6086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" HandleID="k8s-pod-network.78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.179 [INFO][6086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.179 [INFO][6086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.201 [WARNING][6086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" HandleID="k8s-pod-network.78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.201 [INFO][6086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" HandleID="k8s-pod-network.78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.203 [INFO][6086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:24.209480 containerd[1734]: 2025-07-06 23:57:24.207 [INFO][6079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:24.209480 containerd[1734]: time="2025-07-06T23:57:24.209317152Z" level=info msg="TearDown network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\" successfully" Jul 6 23:57:24.209480 containerd[1734]: time="2025-07-06T23:57:24.209349152Z" level=info msg="StopPodSandbox for \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\" returns successfully" Jul 6 23:57:24.211154 containerd[1734]: time="2025-07-06T23:57:24.211000669Z" level=info msg="RemovePodSandbox for \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\"" Jul 6 23:57:24.211602 containerd[1734]: time="2025-07-06T23:57:24.211167371Z" level=info msg="Forcibly stopping sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\"" Jul 6 23:57:24.244118 systemd[1]: run-containerd-runc-k8s.io-1e2cfa6d8eb2f49efa98f878e42285df1020de33f34511338179683397067f89-runc.fMeWTl.mount: Deactivated successfully. Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.274 [WARNING][6100] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0", GenerateName:"calico-kube-controllers-5647f696c4-", Namespace:"calico-system", SelfLink:"", UID:"5065e550-3d85-474c-9080-cbf916c3e61c", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5647f696c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"f3a47cd59718274efa46c0294262a63831b580e6be8fda9381929958288af543", Pod:"calico-kube-controllers-5647f696c4-hvltb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ad5bd7e259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.274 [INFO][6100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.274 [INFO][6100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" iface="eth0" netns="" Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.274 [INFO][6100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.274 [INFO][6100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.348 [INFO][6107] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" HandleID="k8s-pod-network.78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.349 [INFO][6107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.349 [INFO][6107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.366 [WARNING][6107] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" HandleID="k8s-pod-network.78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.367 [INFO][6107] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" HandleID="k8s-pod-network.78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--kube--controllers--5647f696c4--hvltb-eth0" Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.371 [INFO][6107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:24.376717 containerd[1734]: 2025-07-06 23:57:24.374 [INFO][6100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4" Jul 6 23:57:24.379525 containerd[1734]: time="2025-07-06T23:57:24.377806791Z" level=info msg="TearDown network for sandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\" successfully" Jul 6 23:57:24.390844 containerd[1734]: time="2025-07-06T23:57:24.390794625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:24.391319 containerd[1734]: time="2025-07-06T23:57:24.391287230Z" level=info msg="RemovePodSandbox \"78c44a76166ae32e66e5f1d63d61a99acf47754398b08f8990b9fc538ae8d3a4\" returns successfully" Jul 6 23:57:24.393934 containerd[1734]: time="2025-07-06T23:57:24.393900857Z" level=info msg="StopPodSandbox for \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\"" Jul 6 23:57:24.483540 kubelet[3188]: I0706 23:57:24.483311 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-kl5jd" podStartSLOduration=30.924370254 podStartE2EDuration="44.483242079s" podCreationTimestamp="2025-07-06 23:56:40 +0000 UTC" firstStartedPulling="2025-07-06 23:57:09.711551036 +0000 UTC m=+49.781146597" lastFinishedPulling="2025-07-06 23:57:23.270422961 +0000 UTC m=+63.340018422" observedRunningTime="2025-07-06 23:57:24.483053177 +0000 UTC m=+64.552648738" watchObservedRunningTime="2025-07-06 23:57:24.483242079 +0000 UTC m=+64.552837640" Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.546 [WARNING][6121] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5edfa730-d45f-46ca-a7ed-ef497ff8b782", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca", Pod:"goldmane-58fd7646b9-kl5jd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1a77e00d823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.547 [INFO][6121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.547 [INFO][6121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" iface="eth0" netns="" Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.547 [INFO][6121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.547 [INFO][6121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.600 [INFO][6141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" HandleID="k8s-pod-network.1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.600 [INFO][6141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.600 [INFO][6141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.611 [WARNING][6141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" HandleID="k8s-pod-network.1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.611 [INFO][6141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" HandleID="k8s-pod-network.1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.613 [INFO][6141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:24.616279 containerd[1734]: 2025-07-06 23:57:24.614 [INFO][6121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:24.618047 containerd[1734]: time="2025-07-06T23:57:24.616320353Z" level=info msg="TearDown network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\" successfully" Jul 6 23:57:24.618047 containerd[1734]: time="2025-07-06T23:57:24.616352053Z" level=info msg="StopPodSandbox for \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\" returns successfully" Jul 6 23:57:24.618047 containerd[1734]: time="2025-07-06T23:57:24.617173962Z" level=info msg="RemovePodSandbox for \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\"" Jul 6 23:57:24.618047 containerd[1734]: time="2025-07-06T23:57:24.617207462Z" level=info msg="Forcibly stopping sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\"" Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.704 [WARNING][6161] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5edfa730-d45f-46ca-a7ed-ef497ff8b782", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"4251ca22b3e4e0fbbaefe9580a14ea0fe60358662fdb06e7ff31e20728541bca", Pod:"goldmane-58fd7646b9-kl5jd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1a77e00d823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.705 [INFO][6161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.705 [INFO][6161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" iface="eth0" netns="" Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.706 [INFO][6161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.706 [INFO][6161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.803 [INFO][6173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" HandleID="k8s-pod-network.1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.805 [INFO][6173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.806 [INFO][6173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.818 [WARNING][6173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" HandleID="k8s-pod-network.1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.818 [INFO][6173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" HandleID="k8s-pod-network.1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Workload="ci--4081.3.4--a--6a836f1a00-k8s-goldmane--58fd7646b9--kl5jd-eth0" Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.822 [INFO][6173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:24.834243 containerd[1734]: 2025-07-06 23:57:24.827 [INFO][6161] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1" Jul 6 23:57:24.836742 containerd[1734]: time="2025-07-06T23:57:24.834560506Z" level=info msg="TearDown network for sandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\" successfully" Jul 6 23:57:24.851541 containerd[1734]: time="2025-07-06T23:57:24.851482680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:24.852708 containerd[1734]: time="2025-07-06T23:57:24.851911085Z" level=info msg="RemovePodSandbox \"1173c9893f5e6b7a496503dd188f0632fc0e2bbc4629f9d8e43ebd69e11760b1\" returns successfully" Jul 6 23:57:24.855001 containerd[1734]: time="2025-07-06T23:57:24.854965116Z" level=info msg="StopPodSandbox for \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\"" Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:24.994 [WARNING][6190] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17855cdd-9fca-46b9-9af2-cad254d32cd1", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a", Pod:"csi-node-driver-9vfbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a8c920207e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:24.995 [INFO][6190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:24.995 [INFO][6190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" iface="eth0" netns="" Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:24.995 [INFO][6190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:24.996 [INFO][6190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:25.114 [INFO][6199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" HandleID="k8s-pod-network.c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:25.114 [INFO][6199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:25.114 [INFO][6199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:25.125 [WARNING][6199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" HandleID="k8s-pod-network.c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:25.125 [INFO][6199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" HandleID="k8s-pod-network.c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:25.127 [INFO][6199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:25.140992 containerd[1734]: 2025-07-06 23:57:25.135 [INFO][6190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:25.140992 containerd[1734]: time="2025-07-06T23:57:25.140153360Z" level=info msg="TearDown network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\" successfully" Jul 6 23:57:25.140992 containerd[1734]: time="2025-07-06T23:57:25.140186160Z" level=info msg="StopPodSandbox for \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\" returns successfully" Jul 6 23:57:25.142399 containerd[1734]: time="2025-07-06T23:57:25.142141080Z" level=info msg="RemovePodSandbox for \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\"" Jul 6 23:57:25.142399 containerd[1734]: time="2025-07-06T23:57:25.142179181Z" level=info msg="Forcibly stopping sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\"" Jul 6 23:57:25.315210 containerd[1734]: time="2025-07-06T23:57:25.315146466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:25.319511 containerd[1734]: time="2025-07-06T23:57:25.319436610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:57:25.326546 containerd[1734]: time="2025-07-06T23:57:25.326488083Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:25.333322 containerd[1734]: time="2025-07-06T23:57:25.333267953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:25.336144 containerd[1734]: time="2025-07-06T23:57:25.336091982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.063525299s" Jul 6 23:57:25.336271 containerd[1734]: time="2025-07-06T23:57:25.336146283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:57:25.341155 containerd[1734]: 
time="2025-07-06T23:57:25.341111534Z" level=info msg="CreateContainer within sandbox \"6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.254 [WARNING][6215] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17855cdd-9fca-46b9-9af2-cad254d32cd1", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a", Pod:"csi-node-driver-9vfbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a8c920207e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.255 [INFO][6215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.255 [INFO][6215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" iface="eth0" netns="" Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.255 [INFO][6215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.255 [INFO][6215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.312 [INFO][6224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" HandleID="k8s-pod-network.c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.312 [INFO][6224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
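The repeated WARNING entries in this teardown are Calico's idempotency guard during "Forcibly stopping sandbox": the WorkloadEndpoint for csi-node-driver-9vfbs is owned by the live sandbox 6d4351c8..., while the CNI DEL names the stale sandbox c014c627..., so the endpoint is kept and only the (already absent) IPAM handle is cleaned up. A minimal Go sketch of that guard; the type and function names are illustrative, not Calico's actual source:

    package main

    import "fmt"

    // WorkloadEndpoint carries, among other fields, the ID of the sandbox
    // that currently owns the pod's network endpoint.
    type WorkloadEndpoint struct {
        ContainerID string
    }

    // shouldDeleteWEP mirrors the check behind "CNI_CONTAINERID does not
    // match WorkloadEndpoint ContainerID, don't delete WEP": a CNI DEL may
    // only remove the endpoint if it names the owning sandbox.
    func shouldDeleteWEP(wep WorkloadEndpoint, cniContainerID string) bool {
        return wep.ContainerID == cniContainerID
    }

    func main() {
        wep := WorkloadEndpoint{ContainerID: "6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a"}
        stale := "c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403"
        if !shouldDeleteWEP(wep, stale) {
            fmt.Println("don't delete WEP; teardown is a no-op for the endpoint")
        }
    }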
Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.312 [INFO][6224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.327 [WARNING][6224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" HandleID="k8s-pod-network.c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.327 [INFO][6224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" HandleID="k8s-pod-network.c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Workload="ci--4081.3.4--a--6a836f1a00-k8s-csi--node--driver--9vfbs-eth0" Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.338 [INFO][6224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:25.346058 containerd[1734]: 2025-07-06 23:57:25.342 [INFO][6215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403" Jul 6 23:57:25.346058 containerd[1734]: time="2025-07-06T23:57:25.344779172Z" level=info msg="TearDown network for sandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\" successfully" Jul 6 23:57:25.367622 containerd[1734]: time="2025-07-06T23:57:25.367566507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:25.367763 containerd[1734]: time="2025-07-06T23:57:25.367665208Z" level=info msg="RemovePodSandbox \"c014c6279fe0a90e634ddb2c537420ab7ef0a4bc7e8acfece761cc4c1a675403\" returns successfully" Jul 6 23:57:25.368929 containerd[1734]: time="2025-07-06T23:57:25.368696119Z" level=info msg="StopPodSandbox for \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\"" Jul 6 23:57:25.395240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415604622.mount: Deactivated successfully. Jul 6 23:57:25.396110 containerd[1734]: time="2025-07-06T23:57:25.395684797Z" level=info msg="CreateContainer within sandbox \"6d4351c854973ca56c909a2af6e8b635ec247f86824c73795f84527cfd5e998a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ac498e8206263c66959316a94d690ba52b4e36535b095ebe9dcab90839fd2185\"" Jul 6 23:57:25.401712 containerd[1734]: time="2025-07-06T23:57:25.399256834Z" level=info msg="StartContainer for \"ac498e8206263c66959316a94d690ba52b4e36535b095ebe9dcab90839fd2185\"" Jul 6 23:57:25.475078 systemd[1]: run-containerd-runc-k8s.io-ac498e8206263c66959316a94d690ba52b4e36535b095ebe9dcab90839fd2185-runc.MrNQra.mount: Deactivated successfully. Jul 6 23:57:25.490719 systemd[1]: Started cri-containerd-ac498e8206263c66959316a94d690ba52b4e36535b095ebe9dcab90839fd2185.scope - libcontainer container ac498e8206263c66959316a94d690ba52b4e36535b095ebe9dcab90839fd2185. Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.507 [WARNING][6239] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0", GenerateName:"calico-apiserver-6cbd67b8cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f831b75-68df-43d2-bb47-b29865b10566", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbd67b8cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54", Pod:"calico-apiserver-6cbd67b8cf-chfgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b5e1c78d4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.507 [INFO][6239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.507 [INFO][6239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" iface="eth0" netns="" Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.507 [INFO][6239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.507 [INFO][6239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.558 [INFO][6273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" HandleID="k8s-pod-network.842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.558 [INFO][6273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.558 [INFO][6273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.578 [WARNING][6273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" HandleID="k8s-pod-network.842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.578 [INFO][6273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" HandleID="k8s-pod-network.842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.593 [INFO][6273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:25.597095 containerd[1734]: 2025-07-06 23:57:25.595 [INFO][6239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:25.597769 containerd[1734]: time="2025-07-06T23:57:25.597160077Z" level=info msg="TearDown network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\" successfully" Jul 6 23:57:25.597769 containerd[1734]: time="2025-07-06T23:57:25.597192777Z" level=info msg="StopPodSandbox for \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\" returns successfully" Jul 6 23:57:25.597859 containerd[1734]: time="2025-07-06T23:57:25.597816984Z" level=info msg="RemovePodSandbox for \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\"" Jul 6 23:57:25.597859 containerd[1734]: time="2025-07-06T23:57:25.597849084Z" level=info msg="Forcibly stopping sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\"" Jul 6 23:57:25.664936 containerd[1734]: time="2025-07-06T23:57:25.664801875Z" level=info msg="StartContainer for \"ac498e8206263c66959316a94d690ba52b4e36535b095ebe9dcab90839fd2185\" returns successfully" Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.689 [WARNING][6299] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0", GenerateName:"calico-apiserver-6cbd67b8cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f831b75-68df-43d2-bb47-b29865b10566", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbd67b8cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-6a836f1a00", ContainerID:"4a71a28b133158652d0c940323b956ab1d9b408fe996a091b83d32c105156b54", Pod:"calico-apiserver-6cbd67b8cf-chfgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b5e1c78d4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.691 [INFO][6299] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.691 [INFO][6299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" iface="eth0" netns="" Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.691 [INFO][6299] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.691 [INFO][6299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.727 [INFO][6320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" HandleID="k8s-pod-network.842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.727 [INFO][6320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.728 [INFO][6320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.741 [WARNING][6320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" HandleID="k8s-pod-network.842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.741 [INFO][6320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" HandleID="k8s-pod-network.842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Workload="ci--4081.3.4--a--6a836f1a00-k8s-calico--apiserver--6cbd67b8cf--chfgm-eth0" Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.743 [INFO][6320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:25.746917 containerd[1734]: 2025-07-06 23:57:25.745 [INFO][6299] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e" Jul 6 23:57:25.747605 containerd[1734]: time="2025-07-06T23:57:25.746960823Z" level=info msg="TearDown network for sandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\" successfully" Jul 6 23:57:25.756985 containerd[1734]: time="2025-07-06T23:57:25.756932126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:25.757143 containerd[1734]: time="2025-07-06T23:57:25.757055627Z" level=info msg="RemovePodSandbox \"842046926aacd76cba2fd80e01c1245853895554c9c3fef42407280b4f98cc4e\" returns successfully" Jul 6 23:57:26.156047 kubelet[3188]: I0706 23:57:26.155991 3188 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:57:26.156995 kubelet[3188]: I0706 23:57:26.156151 3188 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:57:45.530501 kubelet[3188]: I0706 23:57:45.530139 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:45.599222 kubelet[3188]: I0706 23:57:45.596606 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9vfbs" podStartSLOduration=47.226042434 podStartE2EDuration="1m5.596579755s" podCreationTimestamp="2025-07-06 23:56:40 +0000 UTC" firstStartedPulling="2025-07-06 23:57:06.966830174 +0000 UTC m=+47.036425635" lastFinishedPulling="2025-07-06 23:57:25.337367395 +0000 UTC m=+65.406962956" observedRunningTime="2025-07-06 23:57:26.537808486 +0000 UTC m=+66.607403947" watchObservedRunningTime="2025-07-06 23:57:45.596579755 +0000 UTC m=+85.666175216" Jul 6 23:57:53.261766 systemd[1]: run-containerd-runc-k8s.io-8461ef3509943c754fd49807069a565616aa2a441a4fad561eea47e2476d9460-runc.oU6cqk.mount: Deactivated successfully. Jul 6 23:57:54.215065 systemd[1]: run-containerd-runc-k8s.io-1e2cfa6d8eb2f49efa98f878e42285df1020de33f34511338179683397067f89-runc.Q8n8RO.mount: Deactivated successfully. Jul 6 23:58:11.800828 systemd[1]: Started sshd@7-10.200.8.46:22-10.200.16.10:49478.service - OpenSSH per-connection server daemon (10.200.16.10:49478). 
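The two kubelet csi_plugin.go lines above record the registration handshake for the node-driver-registrar container started earlier: the Calico CSI driver exposes a gRPC Identity service on /var/lib/kubelet/plugins/csi.tigera.io/csi.sock, and kubelet probes it before registering the driver. A rough sketch of that probe using the CSI spec's Go bindings; this is a sketch of the validation call, not kubelet's actual code:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Dial the driver's UNIX socket from the kubelet plugin directory.
        conn, err := grpc.DialContext(ctx,
            "unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        // GetPluginInfo is the identity call kubelet uses to validate a
        // newly announced driver before registering it.
        info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
        if err != nil {
            log.Fatalf("GetPluginInfo: %v", err)
        }
        fmt.Printf("driver %s, vendor version %s\n", info.GetName(), info.GetVendorVersion())
    }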
Jul 6 23:58:12.422010 sshd[6480]: Accepted publickey for core from 10.200.16.10 port 49478 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:12.424016 sshd[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:12.429079 systemd-logind[1705]: New session 10 of user core. Jul 6 23:58:12.435654 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:58:12.944711 sshd[6480]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:12.949991 systemd[1]: sshd@7-10.200.8.46:22-10.200.16.10:49478.service: Deactivated successfully. Jul 6 23:58:12.954988 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:58:12.956320 systemd-logind[1705]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:58:12.958131 systemd-logind[1705]: Removed session 10. Jul 6 23:58:18.067792 systemd[1]: Started sshd@8-10.200.8.46:22-10.200.16.10:49490.service - OpenSSH per-connection server daemon (10.200.16.10:49490). Jul 6 23:58:18.696594 sshd[6494]: Accepted publickey for core from 10.200.16.10 port 49490 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:18.698105 sshd[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:18.706318 systemd-logind[1705]: New session 11 of user core. Jul 6 23:58:18.712696 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:58:19.264827 sshd[6494]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:19.269778 systemd[1]: sshd@8-10.200.8.46:22-10.200.16.10:49490.service: Deactivated successfully. Jul 6 23:58:19.270535 systemd-logind[1705]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:58:19.274709 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:58:19.277934 systemd-logind[1705]: Removed session 11. Jul 6 23:58:24.376350 systemd[1]: Started sshd@9-10.200.8.46:22-10.200.16.10:60012.service - OpenSSH per-connection server daemon (10.200.16.10:60012). Jul 6 23:58:25.008591 sshd[6549]: Accepted publickey for core from 10.200.16.10 port 60012 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:25.010127 sshd[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:25.016445 systemd-logind[1705]: New session 12 of user core. Jul 6 23:58:25.022658 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:58:25.525095 sshd[6549]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:25.529563 systemd-logind[1705]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:58:25.530952 systemd[1]: sshd@9-10.200.8.46:22-10.200.16.10:60012.service: Deactivated successfully. Jul 6 23:58:25.534001 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:58:25.535856 systemd-logind[1705]: Removed session 12. Jul 6 23:58:25.645799 systemd[1]: Started sshd@10-10.200.8.46:22-10.200.16.10:60020.service - OpenSSH per-connection server daemon (10.200.16.10:60020). Jul 6 23:58:26.269411 sshd[6569]: Accepted publickey for core from 10.200.16.10 port 60020 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:26.271151 sshd[6569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:26.276237 systemd-logind[1705]: New session 13 of user core. Jul 6 23:58:26.282734 systemd[1]: Started session-13.scope - Session 13 of User core. 
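Sessions 10 through 13 above all follow the same shape: systemd accepts the TCP connection, sshd authenticates user core by public key (the SHA256:QmI8... fingerprint), pam_unix opens the session, and systemd-logind tracks it as session-N.scope until logout. A minimal client that would produce one such block, using golang.org/x/crypto/ssh; the address and user come from the log, the key path is an assumption:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/core/.ssh/id_rsa") // assumed key location
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "core",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch, not for production
        }
        // One successful Dial corresponds to one "Accepted publickey ...
        // session opened" pair on the server side.
        client, err := ssh.Dial("tcp", "10.200.8.46:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        client.Close() // produces the matching "session closed" entries
    }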
Jul 6 23:58:26.844529 sshd[6569]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:26.848166 systemd[1]: sshd@10-10.200.8.46:22-10.200.16.10:60020.service: Deactivated successfully. Jul 6 23:58:26.851565 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:58:26.853248 systemd-logind[1705]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:58:26.855306 systemd-logind[1705]: Removed session 13. Jul 6 23:58:26.959813 systemd[1]: Started sshd@11-10.200.8.46:22-10.200.16.10:60026.service - OpenSSH per-connection server daemon (10.200.16.10:60026). Jul 6 23:58:27.587543 sshd[6582]: Accepted publickey for core from 10.200.16.10 port 60026 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:27.588540 sshd[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:27.595185 systemd-logind[1705]: New session 14 of user core. Jul 6 23:58:27.602685 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:58:28.099152 sshd[6582]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:28.103910 systemd-logind[1705]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:58:28.104830 systemd[1]: sshd@11-10.200.8.46:22-10.200.16.10:60026.service: Deactivated successfully. Jul 6 23:58:28.109999 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:58:28.111241 systemd-logind[1705]: Removed session 14. Jul 6 23:58:33.219022 systemd[1]: Started sshd@12-10.200.8.46:22-10.200.16.10:48882.service - OpenSSH per-connection server daemon (10.200.16.10:48882). Jul 6 23:58:33.838524 sshd[6601]: Accepted publickey for core from 10.200.16.10 port 48882 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:33.840237 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:33.848750 systemd-logind[1705]: New session 15 of user core. Jul 6 23:58:33.853835 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:58:34.362098 sshd[6601]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:34.367404 systemd[1]: sshd@12-10.200.8.46:22-10.200.16.10:48882.service: Deactivated successfully. Jul 6 23:58:34.369767 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:58:34.370614 systemd-logind[1705]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:58:34.371712 systemd-logind[1705]: Removed session 15. Jul 6 23:58:39.477873 systemd[1]: Started sshd@13-10.200.8.46:22-10.200.16.10:48890.service - OpenSSH per-connection server daemon (10.200.16.10:48890). Jul 6 23:58:39.987064 systemd[1]: run-containerd-runc-k8s.io-4a0dd2bdd7585db6ccb219ad8e07fe1d6741c7e5ba0025ffa8349a02498d76d1-runc.vCpQUX.mount: Deactivated successfully. Jul 6 23:58:40.101406 sshd[6634]: Accepted publickey for core from 10.200.16.10 port 48890 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:40.103081 sshd[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:40.108096 systemd-logind[1705]: New session 16 of user core. Jul 6 23:58:40.115715 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:58:40.612104 sshd[6634]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:40.616397 systemd[1]: sshd@13-10.200.8.46:22-10.200.16.10:48890.service: Deactivated successfully. Jul 6 23:58:40.618969 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:58:40.620989 systemd-logind[1705]: Session 16 logged out. 
Waiting for processes to exit. Jul 6 23:58:40.622015 systemd-logind[1705]: Removed session 16. Jul 6 23:58:41.703020 update_engine[1707]: I20250706 23:58:41.702947 1707 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 6 23:58:41.703020 update_engine[1707]: I20250706 23:58:41.703006 1707 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 6 23:58:41.703556 update_engine[1707]: I20250706 23:58:41.703222 1707 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 6 23:58:41.705113 update_engine[1707]: I20250706 23:58:41.704294 1707 omaha_request_params.cc:62] Current group set to lts Jul 6 23:58:41.705113 update_engine[1707]: I20250706 23:58:41.704440 1707 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 6 23:58:41.705113 update_engine[1707]: I20250706 23:58:41.704456 1707 update_attempter.cc:643] Scheduling an action processor start. Jul 6 23:58:41.705113 update_engine[1707]: I20250706 23:58:41.704509 1707 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 6 23:58:41.705113 update_engine[1707]: I20250706 23:58:41.704548 1707 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 6 23:58:41.705578 locksmithd[1737]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 6 23:58:41.706430 update_engine[1707]: I20250706 23:58:41.705891 1707 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 6 23:58:41.706430 update_engine[1707]: I20250706 23:58:41.705915 1707 omaha_request_action.cc:272] Request: Jul 6 23:58:41.706430 update_engine[1707]: [request body missing from capture] Jul 6 23:58:41.706430 update_engine[1707]: I20250706 23:58:41.705924 1707 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:58:41.708894 update_engine[1707]: I20250706 23:58:41.708500 1707 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:58:41.708894 update_engine[1707]: I20250706 23:58:41.708842 1707 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:58:41.729811 update_engine[1707]: E20250706 23:58:41.729743 1707 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:58:41.729963 update_engine[1707]: I20250706 23:58:41.729861 1707 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 6 23:58:45.728969 systemd[1]: Started sshd@14-10.200.8.46:22-10.200.16.10:53298.service - OpenSSH per-connection server daemon (10.200.16.10:53298). Jul 6 23:58:46.357727 sshd[6669]: Accepted publickey for core from 10.200.16.10 port 53298 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:46.359857 sshd[6669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:46.364670 systemd-logind[1705]: New session 17 of user core. Jul 6 23:58:46.368669 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:58:46.912143 sshd[6669]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:46.916119 systemd[1]: sshd@14-10.200.8.46:22-10.200.16.10:53298.service: Deactivated successfully.
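"Posting an Omaha request to disabled" and the later "Could not resolve host: disabled" fit together: on Flatcar the update endpoint comes from /etc/flatcar/update.conf, and the documented way to switch automatic updates off is SERVER=disabled, which leaves update_engine trying to resolve the literal host name "disabled". A small sketch of how that key=value file determines the Omaha URL; the parsing and the default endpoint are illustrative assumptions, not update_engine's code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Assumed public default; overridden by SERVER= in update.conf.
        server := "https://public.update.flatcar-linux.net/v1/update/"
        if f, err := os.Open("/etc/flatcar/update.conf"); err == nil {
            defer f.Close()
            sc := bufio.NewScanner(f)
            for sc.Scan() {
                line := strings.TrimSpace(sc.Text())
                if v, ok := strings.CutPrefix(line, "SERVER="); ok {
                    // "SERVER=disabled" yields a host that never resolves,
                    // which is exactly the failure logged above.
                    server = v
                }
            }
        }
        fmt.Println("Omaha endpoint:", server)
    }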
Jul 6 23:58:46.916770 systemd-logind[1705]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:58:46.921208 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:58:46.924571 systemd-logind[1705]: Removed session 17. Jul 6 23:58:51.707230 update_engine[1707]: I20250706 23:58:51.706508 1707 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:58:51.707230 update_engine[1707]: I20250706 23:58:51.706882 1707 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:58:51.707230 update_engine[1707]: I20250706 23:58:51.707134 1707 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:58:51.744766 update_engine[1707]: E20250706 23:58:51.744591 1707 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:58:51.744766 update_engine[1707]: I20250706 23:58:51.744719 1707 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 6 23:58:52.033728 systemd[1]: Started sshd@15-10.200.8.46:22-10.200.16.10:34226.service - OpenSSH per-connection server daemon (10.200.16.10:34226). Jul 6 23:58:52.667975 sshd[6682]: Accepted publickey for core from 10.200.16.10 port 34226 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:52.669629 sshd[6682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:52.674878 systemd-logind[1705]: New session 18 of user core. Jul 6 23:58:52.679658 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:58:53.175860 sshd[6682]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:53.180756 systemd-logind[1705]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:58:53.183876 systemd[1]: sshd@15-10.200.8.46:22-10.200.16.10:34226.service: Deactivated successfully. Jul 6 23:58:53.189844 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:58:53.192244 systemd-logind[1705]: Removed session 18. Jul 6 23:58:53.274708 systemd[1]: run-containerd-runc-k8s.io-8461ef3509943c754fd49807069a565616aa2a441a4fad561eea47e2476d9460-runc.31VE6k.mount: Deactivated successfully. Jul 6 23:58:53.299349 systemd[1]: Started sshd@16-10.200.8.46:22-10.200.16.10:34240.service - OpenSSH per-connection server daemon (10.200.16.10:34240). Jul 6 23:58:53.929159 sshd[6731]: Accepted publickey for core from 10.200.16.10 port 34240 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:53.930791 sshd[6731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:53.935640 systemd-logind[1705]: New session 19 of user core. Jul 6 23:58:53.945680 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:58:54.512258 sshd[6731]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:54.515803 systemd[1]: sshd@16-10.200.8.46:22-10.200.16.10:34240.service: Deactivated successfully. Jul 6 23:58:54.518188 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:58:54.520320 systemd-logind[1705]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:58:54.521444 systemd-logind[1705]: Removed session 19. Jul 6 23:58:54.633866 systemd[1]: Started sshd@17-10.200.8.46:22-10.200.16.10:34256.service - OpenSSH per-connection server daemon (10.200.16.10:34256). 
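The sshd@N-<local>:22-<remote>:<port>.service names above are systemd template instances created per connection: sshd.socket runs with Accept=yes, so systemd accepts each TCP connection itself and spawns a fresh service instance with the connected socket passed as file descriptor 3. A toy per-connection service in Go using go-systemd's activation helpers; the real sshd is C, this only illustrates the fd handoff:

    package main

    import (
        "fmt"
        "log"
        "net"

        "github.com/coreos/go-systemd/v22/activation"
    )

    func main() {
        // With Accept=yes, systemd hands each instance exactly one
        // already-accepted connection, starting at fd 3.
        files := activation.Files(true)
        if len(files) != 1 {
            log.Fatalf("expected 1 socket from systemd, got %d", len(files))
        }
        conn, err := net.FileConn(files[0])
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        fmt.Fprintln(conn, "hello from a per-connection instance")
    }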
Jul 6 23:58:55.252060 sshd[6764]: Accepted publickey for core from 10.200.16.10 port 34256 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:55.253792 sshd[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:55.265994 systemd-logind[1705]: New session 20 of user core. Jul 6 23:58:55.273662 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:58:56.099779 systemd[1]: run-containerd-runc-k8s.io-8461ef3509943c754fd49807069a565616aa2a441a4fad561eea47e2476d9460-runc.6kRm9F.mount: Deactivated successfully. Jul 6 23:58:57.712713 sshd[6764]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:57.716368 systemd-logind[1705]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:58:57.717347 systemd[1]: sshd@17-10.200.8.46:22-10.200.16.10:34256.service: Deactivated successfully. Jul 6 23:58:57.720238 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:58:57.721935 systemd-logind[1705]: Removed session 20. Jul 6 23:58:57.837476 systemd[1]: Started sshd@18-10.200.8.46:22-10.200.16.10:34272.service - OpenSSH per-connection server daemon (10.200.16.10:34272). Jul 6 23:58:58.467523 sshd[6802]: Accepted publickey for core from 10.200.16.10 port 34272 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:58.471265 sshd[6802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:58.480088 systemd-logind[1705]: New session 21 of user core. Jul 6 23:58:58.484947 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:58:59.111997 sshd[6802]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:59.116779 systemd[1]: sshd@18-10.200.8.46:22-10.200.16.10:34272.service: Deactivated successfully. Jul 6 23:58:59.120659 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:58:59.121653 systemd-logind[1705]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:58:59.122784 systemd-logind[1705]: Removed session 21. Jul 6 23:58:59.234822 systemd[1]: Started sshd@19-10.200.8.46:22-10.200.16.10:34280.service - OpenSSH per-connection server daemon (10.200.16.10:34280). Jul 6 23:58:59.854790 sshd[6813]: Accepted publickey for core from 10.200.16.10 port 34280 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:59.857504 sshd[6813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:59.863919 systemd-logind[1705]: New session 22 of user core. Jul 6 23:58:59.872654 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:59:00.381962 sshd[6813]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:00.386955 systemd[1]: sshd@19-10.200.8.46:22-10.200.16.10:34280.service: Deactivated successfully. Jul 6 23:59:00.387165 systemd-logind[1705]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:59:00.390195 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:59:00.393292 systemd-logind[1705]: Removed session 22. Jul 6 23:59:01.702477 update_engine[1707]: I20250706 23:59:01.702401 1707 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:59:01.702961 update_engine[1707]: I20250706 23:59:01.702765 1707 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:59:01.703063 update_engine[1707]: I20250706 23:59:01.703028 1707 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 6 23:59:01.708158 update_engine[1707]: E20250706 23:59:01.707984 1707 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:59:01.708414 update_engine[1707]: I20250706 23:59:01.708307 1707 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 6 23:59:05.495849 systemd[1]: Started sshd@20-10.200.8.46:22-10.200.16.10:34888.service - OpenSSH per-connection server daemon (10.200.16.10:34888). Jul 6 23:59:06.120437 sshd[6826]: Accepted publickey for core from 10.200.16.10 port 34888 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:59:06.121092 sshd[6826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:06.126374 systemd-logind[1705]: New session 23 of user core. Jul 6 23:59:06.130648 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:59:06.621593 sshd[6826]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:06.625777 systemd-logind[1705]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:59:06.626705 systemd[1]: sshd@20-10.200.8.46:22-10.200.16.10:34888.service: Deactivated successfully. Jul 6 23:59:06.628982 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:59:06.630182 systemd-logind[1705]: Removed session 23. Jul 6 23:59:09.991861 systemd[1]: run-containerd-runc-k8s.io-4a0dd2bdd7585db6ccb219ad8e07fe1d6741c7e5ba0025ffa8349a02498d76d1-runc.pVb4Tq.mount: Deactivated successfully. Jul 6 23:59:11.702989 update_engine[1707]: I20250706 23:59:11.702899 1707 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:59:11.703556 update_engine[1707]: I20250706 23:59:11.703250 1707 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:59:11.703736 update_engine[1707]: I20250706 23:59:11.703655 1707 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:59:11.723009 update_engine[1707]: E20250706 23:59:11.722085 1707 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722189 1707 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722204 1707 omaha_request_action.cc:617] Omaha request response: Jul 6 23:59:11.723009 update_engine[1707]: E20250706 23:59:11.722301 1707 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722332 1707 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722338 1707 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722345 1707 update_attempter.cc:306] Processing Done. Jul 6 23:59:11.723009 update_engine[1707]: E20250706 23:59:11.722363 1707 update_attempter.cc:619] Update failed. 
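The failed check above runs as a fixed retry loop: each transfer fails immediately because the host name "disabled" never resolves, libcurl_http_fetcher logs "No HTTP response, retry N" roughly ten seconds apart, and after the third retry the action processor aborts the update attempt. A compressed sketch of that shape; the attempt count and spacing are read off the log timestamps, not taken from update_engine's source:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        var lastErr error
        for attempt := 1; attempt <= 3; attempt++ {
            resp, err := http.Get("https://disabled/v1/update/") // path is an assumption
            if err == nil {
                resp.Body.Close()
                fmt.Println("unexpected success")
                return
            }
            lastErr = err
            fmt.Printf("No HTTP response, retry %d\n", attempt)
            time.Sleep(10 * time.Second)
        }
        fmt.Println("Omaha request network transfer failed:", lastErr)
    }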
Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722373 1707 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722379 1707 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722386 1707 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722501 1707 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722536 1707 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 6 23:59:11.723009 update_engine[1707]: I20250706 23:59:11.722545 1707 omaha_request_action.cc:272] Request: Jul 6 23:59:11.723009 update_engine[1707]: [request body missing from capture] Jul 6 23:59:11.723649 update_engine[1707]: I20250706 23:59:11.722553 1707 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:59:11.723649 update_engine[1707]: I20250706 23:59:11.722743 1707 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:59:11.723649 update_engine[1707]: I20250706 23:59:11.722959 1707 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:59:11.723875 locksmithd[1737]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 6 23:59:11.733819 systemd[1]: Started sshd@21-10.200.8.46:22-10.200.16.10:59846.service - OpenSSH per-connection server daemon (10.200.16.10:59846). Jul 6 23:59:11.820455 update_engine[1707]: E20250706 23:59:11.820384 1707 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:59:11.820629 update_engine[1707]: I20250706 23:59:11.820531 1707 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 6 23:59:11.820629 update_engine[1707]: I20250706 23:59:11.820546 1707 omaha_request_action.cc:617] Omaha request response: Jul 6 23:59:11.820629 update_engine[1707]: I20250706 23:59:11.820557 1707 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:59:11.820629 update_engine[1707]: I20250706 23:59:11.820567 1707 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:59:11.820629 update_engine[1707]: I20250706 23:59:11.820575 1707 update_attempter.cc:306] Processing Done. Jul 6 23:59:11.820629 update_engine[1707]: I20250706 23:59:11.820585 1707 update_attempter.cc:310] Error event sent. Jul 6 23:59:11.820629 update_engine[1707]: I20250706 23:59:11.820598 1707 update_check_scheduler.cc:74] Next update check in 44m1s Jul 6 23:59:11.821124 locksmithd[1737]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 6 23:59:12.360837 sshd[6862]: Accepted publickey for core from 10.200.16.10 port 59846 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:59:12.362672 sshd[6862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:12.368421 systemd-logind[1705]: New session 24 of user core.
Jul 6 23:59:12.371627 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:59:12.862839 sshd[6862]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:12.868557 systemd-logind[1705]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:59:12.869132 systemd[1]: sshd@21-10.200.8.46:22-10.200.16.10:59846.service: Deactivated successfully. Jul 6 23:59:12.877106 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:59:12.879849 systemd-logind[1705]: Removed session 24. Jul 6 23:59:17.985645 systemd[1]: Started sshd@22-10.200.8.46:22-10.200.16.10:59858.service - OpenSSH per-connection server daemon (10.200.16.10:59858). Jul 6 23:59:18.634338 sshd[6875]: Accepted publickey for core from 10.200.16.10 port 59858 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:59:18.635627 sshd[6875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:18.642656 systemd-logind[1705]: New session 25 of user core. Jul 6 23:59:18.647658 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:59:19.222005 sshd[6875]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:19.229208 systemd[1]: sshd@22-10.200.8.46:22-10.200.16.10:59858.service: Deactivated successfully. Jul 6 23:59:19.233366 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:59:19.235898 systemd-logind[1705]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:59:19.237961 systemd-logind[1705]: Removed session 25. Jul 6 23:59:23.207989 systemd[1]: run-containerd-runc-k8s.io-1e2cfa6d8eb2f49efa98f878e42285df1020de33f34511338179683397067f89-runc.CzGGyG.mount: Deactivated successfully. Jul 6 23:59:24.340873 systemd[1]: Started sshd@23-10.200.8.46:22-10.200.16.10:49930.service - OpenSSH per-connection server daemon (10.200.16.10:49930). Jul 6 23:59:24.962775 sshd[6930]: Accepted publickey for core from 10.200.16.10 port 49930 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:59:24.968961 sshd[6930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:24.973432 systemd-logind[1705]: New session 26 of user core. Jul 6 23:59:24.980681 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:59:25.476443 sshd[6930]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:25.479862 systemd[1]: sshd@23-10.200.8.46:22-10.200.16.10:49930.service: Deactivated successfully. Jul 6 23:59:25.483298 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:59:25.485003 systemd-logind[1705]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:59:25.486407 systemd-logind[1705]: Removed session 26. Jul 6 23:59:30.593918 systemd[1]: Started sshd@24-10.200.8.46:22-10.200.16.10:55800.service - OpenSSH per-connection server daemon (10.200.16.10:55800). Jul 6 23:59:31.227911 sshd[6945]: Accepted publickey for core from 10.200.16.10 port 55800 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:59:31.229600 sshd[6945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:31.235539 systemd-logind[1705]: New session 27 of user core. Jul 6 23:59:31.239891 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 6 23:59:31.734237 sshd[6945]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:31.737630 systemd[1]: sshd@24-10.200.8.46:22-10.200.16.10:55800.service: Deactivated successfully. 
Jul 6 23:59:31.741878 systemd[1]: session-27.scope: Deactivated successfully. Jul 6 23:59:31.743821 systemd-logind[1705]: Session 27 logged out. Waiting for processes to exit. Jul 6 23:59:31.744868 systemd-logind[1705]: Removed session 27. Jul 6 23:59:36.858806 systemd[1]: Started sshd@25-10.200.8.46:22-10.200.16.10:55804.service - OpenSSH per-connection server daemon (10.200.16.10:55804). Jul 6 23:59:37.481255 sshd[6958]: Accepted publickey for core from 10.200.16.10 port 55804 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:59:37.482075 sshd[6958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:37.492042 systemd-logind[1705]: New session 28 of user core. Jul 6 23:59:37.494674 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 6 23:59:37.992191 sshd[6958]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:37.996657 systemd[1]: sshd@25-10.200.8.46:22-10.200.16.10:55804.service: Deactivated successfully. Jul 6 23:59:37.999116 systemd[1]: session-28.scope: Deactivated successfully. Jul 6 23:59:37.999964 systemd-logind[1705]: Session 28 logged out. Waiting for processes to exit. Jul 6 23:59:38.001363 systemd-logind[1705]: Removed session 28.
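Every session in this stretch (10 through 28) opens and closes cleanly, with no lingering scopes. A small scanner that checks this by pairing "New session N" with "Removed session N" across journal text like the above; the regular expressions assume exactly the logind phrasing shown here:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        newRe := regexp.MustCompile(`New session (\d+) of user (\w+)`)
        delRe := regexp.MustCompile(`Removed session (\d+)`)
        open := map[string]string{} // session id -> user

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // journal lines here are very long
        for sc.Scan() {
            line := sc.Text()
            // A single physical line may hold several entries, so match all.
            for _, m := range newRe.FindAllStringSubmatch(line, -1) {
                open[m[1]] = m[2]
            }
            for _, m := range delRe.FindAllStringSubmatch(line, -1) {
                delete(open, m[1])
            }
        }
        // For this excerpt the map should be empty: every session was removed.
        fmt.Printf("sessions still open: %d\n", len(open))
    }

Run as, for example, "go run sessions.go < journal.txt"; for the excerpt above it should report zero open sessions.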